tag:blogger.com,1999:blog-30926474377838301922024-03-12T23:35:11.990-04:00Matthew S. Dippel - The Official BlogMatthew S. Dippelhttp://www.blogger.com/profile/12625887706401392768noreply@blogger.comBlogger76125tag:blogger.com,1999:blog-3092647437783830192.post-28976156800020488302018-10-21T13:45:00.002-04:002018-10-21T13:45:54.762-04:00HOWTO: Install Keybase on openSUSE Tumbleweed with signature verification and update repository<h1 id="howto-install-keybase-on-opensuse-tumbleweed-with-signature-verification-and-update-repository">HOWTO: Install Keybase on openSUSE Tumbleweed with signature verification and update repository</h1>
<p>I’m a huge fan of <a href="https://keybase.io">keybase</a> as well as <a href="https://software.opensuse.org/distributions/tumbleweed">openSUSE Tumbleweed</a> but looking over the installation page, it doesn’t appear that openSUSE is among the supported Linux distributions.</p>
<p>Making it work, however, is only a tiny bit more difficult than setting it up for Fedora (which is to say, it’s not difficult at all). Since I end up doing this rather regularly, I thought I’d throw a HOWTO out there that I can refer back to. If you’re not me, then hey, hopefully I was able to help you out, too!</p>
<p>Note that I run 64-bit openSUSE Tumbleweed, as I assume you <em>probably</em> do, too. These instructions <em>should</em> work for the 32-bit version, however, I haven’t tested them. And, of course, while this works for me, it may not always work and may not work at all for you.</p>
<h2 id="before-we-start---fixing-a-few-things">Before We Start - Fixing a few Things</h2>
<h3 id="problem-1---missing-dependencies-that-cannot-be-resolved">Problem #1 - Missing Dependencies that Cannot be Resolved</h3>
<p>Nevermind, <a href="https://github.com/keybase/keybase-issues/issues/3202">they fixed this for me</a> (love those folks!)</p>
<h3 id="problem-2---the-package-signature-cannot-be-verified">Problem #2 - The Package Signature Cannot be Verified</h3>
<p>They don’t provide instructions for importing their signing key, so we’ll do it with <code>rpm</code>:</p>
<pre class=" language-bash"><code class="prism language-bash"><span class="token comment"># for users of zsh, the following one-liner works - run it and skip the rest</span>
<span class="token function">sudo</span> rpm --import <span class="token operator">=</span><span class="token punctuation">(</span>curl https://keybase.io/docs/server_security/code_signing_key.asc<span class="token punctuation">)</span>
<span class="token comment"># If you're on bash, '=( ... )' isn't supported, but we can do the same with the following:</span>
TMFILE<span class="token operator">=</span><span class="token string">"<span class="token variable"><span class="token variable">$(</span>mktemp<span class="token variable">)</span></span>"</span>
curl https://keybase.io/docs/server_security/code_signing_key.asc <span class="token operator">></span> <span class="token string">"<span class="token variable">$TMFILE</span>"</span>
<span class="token function">sudo</span> rpm --import <span class="token string">"<span class="token variable">$TMFILE</span>"</span>
<span class="token function">rm</span> <span class="token string">"<span class="token variable">$TMFILE</span>"</span>
</code></pre>
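<p>If you want the bash variant to clean up after itself even when a step fails, the same temp-file pattern can be hardened with <code>trap</code>. A sketch only: the <code>download</code> function below is a stand-in for the real <code>curl</code> call so the logic can be shown without touching the network.</p>

```shell
#!/usr/bin/env bash
# Sketch of the temp-file import pattern above, hardened with `trap`
# so the temp file is removed even if the download or import fails.
set -euo pipefail

# Stand-in for: curl https://keybase.io/docs/server_security/code_signing_key.asc
download() { printf '%s\n' '-----BEGIN PGP PUBLIC KEY BLOCK-----'; }

TMPFILE="$(mktemp)"
trap 'rm -f "$TMPFILE"' EXIT   # cleanup runs on any exit path

download > "$TMPFILE"
# A real run would now do: sudo rpm --import "$TMPFILE"
echo "fetched $(wc -c < "$TMPFILE") bytes"
```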
<h2 id="installing-with-zypper">Installing with <code>zypper</code></h2>
<p>Follow the instructions on <a href="https://keybase.io/docs/the_app/install_linux">the download page</a> for installing under <strong>Fedora</strong>, however, replace <code>yum</code> with <code>zypper</code>. As of the time of this writing, that command would be:</p>
<pre class=" language-bash"><code class="prism language-bash"><span class="token comment"># Under bash/zsh</span>
<span class="token function">sudo</span> zypper <span class="token keyword">in</span> -y https://prerelease.keybase.io/keybase_amd64.rpm
<span class="token comment"># If you're on the 32-bit version, you'd use:</span>
<span class="token function">sudo</span> zypper <span class="token keyword">in</span> -y https://prerelease.keybase.io/keybase_i386.rpm
<span class="token comment"># ... or install the Windows 95 version (/s)</span>
</code></pre>
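<p>If you ever script this step, the architecture choice can be derived from <code>uname -m</code>. A small sketch; the function simply maps architectures to the file names above and is nothing Keybase-official:</p>

```shell
#!/usr/bin/env bash
# Sketch: map `uname -m` output to the matching keybase RPM name.
set -euo pipefail

pick_pkg() {
    case "$1" in
        x86_64) echo "keybase_amd64.rpm" ;;
        i?86)   echo "keybase_i386.rpm" ;;
        *)      echo "unsupported architecture: $1" >&2; return 1 ;;
    esac
}

pkg="$(pick_pkg "$(uname -m)" || echo "none")"
echo "would run: sudo zypper in -y https://prerelease.keybase.io/$pkg"
```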
<p>And, finally . . .</p>
<pre class=" language-bash"><code class="prism language-bash">run_keybase
keybase login
</code></pre>
<p>Follow the install instructions for logging in for the first time (usually involves running <code>keybase device add</code> on another machine that is already logged in).</p>
<h2 id="making-upgrades-work-when-zypper-up-or-zypper-dup-is-run">Making Upgrades work when <code>zypper up</code> or <code>zypper dup</code> is run</h2>
<p>Keybase is updated … sheesh, it seems like <em>daily</em>, sometimes. But you’re running Tumbleweed, so installing 1,000 updates every week is kind of your thing. Unfortunately, the repo for keybase is not installed by default when <code>zypper</code> installs the keybase RPM, so we have to add it manually.</p>
<p>To get started, you should make sure that you <em>do not</em> have a reference to the keybase repo (who knows, maybe they’ve added support since I wrote this!)</p>
<pre class=" language-bash"><code class="prism language-bash"><span class="token function">sudo</span> zypper lr
</code></pre>
<p>You should see something along the lines of …</p>
<pre><code>
# | Alias | Name | Enabled | GPG Check | Refresh
--+---------------------+-----------------------------+---------+-----------+--------
1 | openSUSE-20181015-0 | openSUSE-20181015-0 | No | ---- | ----
2 | repo-debug | openSUSE-Tumbleweed-Debug | No | ---- | ----
3 | repo-non-oss | openSUSE-Tumbleweed-Non-Oss | Yes | (r ) Yes | Yes
4 | repo-oss | openSUSE-Tumbleweed-Oss | Yes | (r ) Yes | Yes
5 | repo-source | openSUSE-Tumbleweed-Source | No | ---- | ----
6 | repo-update | openSUSE-Tumbleweed-Update | Yes | (r ) Yes | Yes
</code></pre>
<p>That’s from a fresh install that I did on Saturday with nothing added beyond <code>tmux</code> and <code>zsh</code> to the default <code>server</code> configuration pattern.</p>
<p>If you don’t have a keybase repository, let’s add it:</p>
<pre class=" language-bash"><code class="prism language-bash"><span class="token function">sudo</span> zypper ar -f http://prerelease.keybase.io/rpm/x86_64 keybase
<span class="token function">sudo</span> zypper --gpg-auto-import-keys refresh
<span class="token comment"># Presently you get a warning that the repomd.xml is unsigned; allow this, though it's not a great thing if you have security concerns</span>
</code></pre>
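<p>If you rebuild machines often, the check-then-add dance can be scripted so it’s idempotent. A sketch: the <code>zypper</code> stub below fakes the <code>lr</code> output so the logic can run anywhere; on a real Tumbleweed box, delete the stub and un-comment the real commands.</p>

```shell
#!/usr/bin/env bash
# Sketch: add the keybase repo only if `zypper lr` doesn't already list it.
set -euo pipefail

# Stub that mimics `zypper lr` output on a box without the keybase repo;
# remove this function to run against the real zypper.
zypper() { printf '1 | repo-oss | openSUSE-Tumbleweed-Oss | Yes\n'; }

if zypper lr | grep -qw 'keybase'; then
    echo "keybase repo already present"
else
    echo "adding keybase repo"
    # The real commands would be:
    #   sudo zypper ar -f http://prerelease.keybase.io/rpm/x86_64 keybase
    #   sudo zypper --gpg-auto-import-keys refresh
fi
```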
<h2 id="performing-an-update">Performing an Update</h2>
<p>Simple! Next time you do a <code>zypper up</code> or <code>zypper dup</code>, keybase’s repository will be checked and the software will be updated whenever a new version is available.<br>
However, don’t forget to run <code>run_keybase</code> after the software is updated, or you’ll still be running the previous version.</p>
Matthew S. Dippelhttp://www.blogger.com/profile/09065753238713480937noreply@blogger.com0tag:blogger.com,1999:blog-3092647437783830192.post-8085724760350691432017-09-16T20:02:00.001-04:002017-09-17T17:25:07.466-04:00HOWTO: Strong Name Sign a .Net Assembly With a Yubikey/Smart Card and Code Signing Key with AssemblyKeyNameAttribute<h2 id="or-as-ill-refer-to-it-from-here-on-out-part-2">Or, as I’ll refer to it from here on out, Part 2</h2>
<p><strong>UPDATE</strong>: I ran into a problem that caused all of this to stop working, so I’ve updated the post with what I had to do to resolve that. See <em>Troubleshooting</em> below.</p>
<p>If you haven’t read <a href="http://matthewdippel.blogspot.com/2017/09/how-to-strong-name-sign-net-assembly.html">Part 1</a>, you’ll want to do that now. There’s a few things there that I’ll be referring to here. I’d also recommend going through the steps up to the point of changing your project in Visual Studio. You can skip that part and continue on here.</p>
<h2 id="introducing-assemblykeynameattribute">Introducing AssemblyKeyNameAttribute</h2>
<p>Something I only hinted about in the last post on this subject was the <a href="https://msdn.microsoft.com/en-us/library/system.reflection.assemblykeynameattribute%28v=vs.110%29.aspx"><code>AssemblyKeyNameAttribute</code></a> (go ahead, click that link, see how sad that documentation made me).</p>
<p>It’s the obvious way to handle signing, and because it lives in your code, it makes it easy to solve many of the “Bad News” parts at the bottom of the last post. You could simply create a new build profile and <code>#if/#endif</code> out the entry in <code>AssemblyInfo.cs</code><sup>0</sup>.</p>
<p>Unfortunately, I couldn’t make it work. And then, out of nowhere, <strong>it did</strong>. Shortly after writing and publishing the last post, I decided to add that attribute back in. I ran compile, received my three PIN prompts and … it built. This was odd, since all past attempts yielded a “Keyset not found” error. I figured that my checking the signing box with delay-sign enabled probably yielded the success, so I undid everything. After a lot of google searching, which yielded a whole mess of Stack Overflow and MSDN Forums questions from a myriad of users who hadn’t figured it out, I ended up with the last post.</p>
<p>The long and the short of it is, most of it is unnecessary. The two <em>necessary</em> parts are what follows.</p>
<h3 id="the-trick-to-getting-assemblykeynameattribute-working">The trick to getting AssemblyKeyNameAttribute working</h3>
<p>The <em>most</em> important part is the <code>sn.exe -c "Microsoft Base Smart Card Crypto Provider"</code>. This <em>must</em> be run or you’ll get a <code>Keyset not found</code> error on build.</p>
<p>However, there’s an ugly catch-22. Running <code>sn.exe -c</code>, if it ends up <em>changing</em> the CSP, can only be done as an elevated user. However, that same elevated user <em>cannot</em> access the key in the personal store of your non-elevated account. So simply adding this to the pre-build or post-build and running Visual Studio under elevation results in the exact same <code>Keyset not found</code> error on build. Good error messages can mean the difference between solving the problem and … well, that last post should give you an idea.</p>
<p>Unfortunately, it doesn’t look like the <code>sn -c</code> invocation persists after a reboot, so we’re minimally stuck with having to run this command manually once after each reboot, or find a way to elevate during build <em>just</em> to run that command. I took the latter option.</p>
<p>To get this working, you’ll need the <a href="https://gist.github.com/Diagonactic/8ab27b144149c28d1a15424d53c4f44a">script located at this gist </a>. The only requirement is that you have the .Net Framework SDK 4.6, 4.6.1 or 4.6.2 installed. It uses it to get the path to the <code>sn.exe</code> file so that it’ll work regardless of how your system is configured.</p>
<p>First, make sure your execution policy for <code>CurrentUser</code> is set properly:</p>
<pre class="prettyprint"><code class="language-ps1 hljs lasso"><span class="hljs-built_in">Set</span><span class="hljs-attribute">-ExecutionPolicy</span> <span class="hljs-attribute">-Scope</span> CurrentUser RemoteSigned</code></pre>
<p>You can replace <code>RemoteSigned</code> with <code>Unrestricted</code> if you’d like to be a little less secure (the script in the gist is signed, but if you modify it and don’t re-sign it, it’ll fail on build).</p>
<p>Now go to your project in Visual Studio and head over to the <em>Build Events</em> tab. If you followed the last post and made changes, you should remove everything from the Post Build <em>except</em> for the <code>SignTool.exe</code> calls. This won’t authenticode sign your resulting <code>.dll</code> or <code>.exe</code> – it’s only strong name signing.</p>
<p>Add a pre-build event as follows:</p>
<pre class="prettyprint"><code class="language-cmd hljs tex">PowerShell -NoProfile -Command "<span class="hljs-command">\path</span><span class="hljs-command">\to</span><span class="hljs-command">\Set</span>-SmartCardCspOnBuild.ps1"</code></pre>
<p>Now open up the <code>Properties\AssemblyInfo.cs</code> file and add the following line:</p>
<pre class="prettyprint"><code class="language-c# hljs json">[assembly: AssemblyKeyName(<span class="hljs-string">"Your Key Container Name"</span>)]</code></pre>
<p>If you aren’t sure what your key container name is, consult the linked post, above, about how to find it.</p>
<p>At this point, simply run a build. You’ll notice that once the build starts, you’ll get an elevation prompt. That’s happening in the PowerShell script. I haven’t figured out a way to get <code>sn.exe</code> (or any other tool) to display the actual CSP that’s in use. Ideally, it’d be nice if it didn’t have to switch it on every build to avoid the elevation prompt, but this works until I can find a better way.</p>
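<p>For reference, the core of such an elevate-and-switch pre-build script can be quite small. The sketch below is <em>not</em> the linked gist, just an illustration of the idea, and the hard-coded <code>sn.exe</code> path is an assumption that varies by installed SDK version (the real script resolves it dynamically):</p>

```powershell
# Hypothetical sketch only -- the linked gist is the real script.
# Path to sn.exe is an assumption; it varies by .NET Framework SDK version.
$sn = 'C:\Program Files (x86)\Microsoft SDKs\Windows\v10.0A\bin\NETFX 4.6.2 Tools\sn.exe'

# Relaunch sn.exe elevated (this is what triggers the UAC prompt at build
# time) and wait for it to finish before the build continues.
Start-Process -FilePath $sn `
    -ArgumentList '-c', '"Microsoft Base Smart Card Crypto Provider"' `
    -Verb RunAs -Wait
```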
<h2 id="troubleshooting">Troubleshooting</h2>
<p>Just as soon as I wrote this on Saturday, the build process stopped working on Sunday. The only thing I had done was switch <code>sn.exe</code> back to the default crypto provider to generate a key so that I could strong name a library that I installed via NuGet which wasn’t already strongly named. I switched it back when I was done, but the build would not work.</p>
<h3 id="apparently-the-snexe-you-use-matters">Apparently, the <code>sn.exe</code> you use matters!</h3>
<p>I had tried to generate the key using <code>sn.exe -k</code> and received an error (the specific error escapes me now, but I recall that it sounded like the “keyset does not exist” error, which is absurd since I was trying to <em>generate</em> a keyset!). I had noticed that <code>ilasm</code> was being picked up from one version of the .NET Framework and <code>sn.exe</code> was being picked up from another, older version, so I modified my path to use the <code>sn.exe</code> of 4.6.2. No dice. On a hunch, I realized that the Smart Card Crypto Provider might not be able to generate a key. This makes sense, since the <code>-k</code> command is used to generate a key and put the key <strong>set</strong> in a file. Smart cards can generate a key but can only export the <em>public</em> key part of the key set. I ran <code>sn.exe -c</code> to reset the CSP back to the default (which I assume is the Base Crypto Provider v1). I ran <code>sn.exe -k</code> and it generated the new <code>key.snk</code> for that library without issue.</p>
<p>Then, I re-ran the build. I received no PIN prompt and an error that the keyset was not found (with the name of my container). Oops, I forgot to switch the CSP back! So I went back to my PowerShell prompt and pointed <code>sn.exe</code> <em>back</em> to the Smart Card Crypto Provider. Right when I did it, I realized this wouldn’t work – I was <strong>already</strong> using the script, above, and this should have happened during pre-build!</p>
<p>And, no surprise, it <em>still</em> didn’t work! After a few choice four-letter words, I launched a fresh PowerShell Administrator prompt (which reset my path) and reran the <code>sn.exe -c "Microsoft Base Smart Card Crypto Provider"</code> command. I didn’t expect this to work, but … it <strong>did</strong>.</p>
<p>There were two differences in the second run. My PowerShell Profile injects the Visual Studio 2017 developer command prompt environment which, on my “developer laptop”<sup>2</sup>, resulted in the <code>sn.exe</code> from the .NET Framework 4.6.1 SDK (specifically the <strong>x64</strong> version) being picked up.</p>
<p>Here’s the thing that makes <em>no</em> sense. I used the <strong>x64</strong> version of <code>sn.exe</code> from .NET Framework 4.6.2 when I reset the CSP and it succeeded in <em>breaking</em> the build, but setting it back with that same version <em>didn’t fix it</em>. My PowerShell script, above, uses the <strong>x86</strong> version from 4.6.2 and <strong>that</strong> didn’t fix it<sup>3</sup>. When I ran the <strong>x64</strong> version from 4.6.<strong>1</strong> to set the CSP to the Smart Card Provider, <strong>it fixed it</strong>.</p>
<p>I’m not convinced it has anything to do with the 32-bit vs the 64-bit version of the <code>sn.exe</code> tool, but I think the .NET Framework version mattered. So, at least until I have time to reproduce the issue, it appears that resetting the CSP using <code>sn.exe -c</code> with <strong>either the 4.6.2 or 4.6.1 version</strong> will break things, but <em>only</em> the 4.6.<strong>1</strong> version (possibly only the <strong>x64</strong> version of that) can be used to fix it.</p>
<p>This is made even more fun by the fact that I can’t find a way to have <em>any</em> of the tools indicate <strong>what provider they’re using to perform these operations</strong>, which would make it really easy to see what is going on (and allow for a more intelligent build script).</p>
<h3 id="in-closing">In Closing</h3>
<p>Good grief this is a hassle. <strong>Every</strong> signing tool that I’ve used from Microsoft does things slightly differently. All of the .Net Framework tools inherit <code>sn.exe</code>’s CSP (<code>csc.exe</code> and <code>ilasm.exe</code>, though I’ve been unable to get the latter to work with my code signing key<sup>4</sup>).</p>
<p>From here, I’m going to find a way to discover what CSP <code>sn.exe</code> is configured for so that I can improve the build script and avoid setting that value if it’s already set correctly. I also recall from past experience with Smart Cards that the OS is supposed to support caching of PIN entry with configurable time-outs. There are some forum posts out there that indicate this may be broken in Windows 10 after a patch that was released in June, and since I’m running insider previews, I don’t really have a way of backing that patch out. I’m also not sure I even have things configured to allow caching, so I’d like to get that solved as well. When I do, I’ll post an update.</p>
<p><sup>0</sup> <small>Of course, this was always possible with the modifications we did to the <code>.csproj</code> file, but not many of us enjoy messing around with MSBuild’s wonky XML-based language. Even Microsoft had a (short) moment of <a href="https://stackoverflow.com/questions/38536978/is-project-json-deprecated">clarity there</a>. Unfortunately, it was too big of a mess to untangle.</small></p>
<p><sup>1</sup> <small>I only use generated keys for strong naming libraries that aren’t mine. This is partly because it feels wrong to sign a library with <em>my code signing key</em> that isn’t <em>my library</em>, even though the purpose of strong naming isn’t to validate ownership/authorship.</small></p>
<p><sup>2</sup> <small>Read that as “it’s pretty banged up and things aren’t always what they seem” on this device, i.e. <code>ilasm.exe</code> being picked up in a location in my path that isn’t the same as <code>sn.exe</code>, so there could be other things going on here.</small></p>
<p><sup>3</sup> <small>The reason I specifically picked the 32-bit version is because Visual Studio is a 32-bit application and I assumed that was the <strong>right</strong> version to use. I’m still pretty sure that’s the <strong>right</strong> version, but evidence would indicate otherwise.</small></p>
<p><sup>4</sup> <small>Don’t even get me started on <code>vsixsigntool.exe</code>, which is the Visual Studio Extension equivalent to <code>signtool.exe</code>. Yeah, it works differently. So far, only <code>signtool.exe</code> works somewhat intelligently. The others require a lot of trial and error if you want to do things in the most secure manner (not storing the miserable keyset in a file in your filesystem)</small></p>Matthew S. Dippelhttp://www.blogger.com/profile/09065753238713480937noreply@blogger.com0tag:blogger.com,1999:blog-3092647437783830192.post-19644692401893607322017-09-15T22:59:00.001-04:002017-09-16T20:12:00.450-04:00HOWTO: Strong Name Sign a .Net Assembly With a Yubikey/Smart Card and Code Signing Key<h2 id="code-signing-strong-name-signing">Code Signing != Strong Name Signing</h2>
<p><b>Note:</b> This post was updated on 9/16/2017 with some corrections. In addition, a <i>much</i> easier way was discovered and that's been written about in <a href="http://matthewdippel.blogspot.com/2017/09/howto-strong-name-sign-net-assembly.html">Part 2</a>. You'll still want to read this part since there's a lot here that isn't covered in the follow-up, but skip the Visual Studio related steps.</p>
<p>First things first: Code Signing and Strong Naming serve two different purposes. Code Signing (referred to by the trademarked “Authenticode” in the Microsoft world) verifies that a library or executable originated from a person or organization. </p>
<p>A Code Signing key validates identity. Code Signing keys are most similar to EV certificates (and, in fact, require the same kind of validation involving a notary, lawyer or CPA and a bunch of documentation proving your name, address, phone and other things).</p>
<p>A Strong Name in .NET adds a signature to a library or executable that ensures that when a program references that library or executable, it’s getting the <em>right</em> one. When the .Net Framework validates a library’s strong name, it doesn’t care what the origin of the signature is, or its trust chain. It simply cares that the key is right. So for all practical purposes, there’s really no great reason to use your code signing certificate to generate the strong name.</p>
<h3 id="so-why-do-it-then">So … why do it then?</h3>
<p>My reasons boiled down to a few things. First, I like to strong name my libraries if they’re going to be put out on a NuGet server (either the public one or the one I use internally at work or at home). The main reason is that I use some of these libraries in projects that require strong names, so it’s an added convenience not to have to <code>ildasm</code>/<code>ilasm</code> sign them on a one-off basis. It’s also helpful to others who might use those libraries.</p>
<p>The problem is that I can be a little absent-minded and on at least one occasion, I published a strong name key to a public git repo. Whoops! I know and follow best practices with certificates that I use, but for whatever reason, I treated this “generated on-the-fly” keypair with reckless abandon.</p>
<p>When I acquired my code signing key, I understood the value of what I was getting. My key would serve as a positive identity of <strong>me</strong> when anyone used an executable signed with it. I purchased a Yubikey to store the key. The Yubikey is basically a USB Smart Card device. Smart Cards work by storing the private key in non-exportable storage and performing all cryptographic operations on-device. </p>
<p>I generated the CSR on a Linux live CD and ensured I made a backup only to an encrypted storage medium protected by a different password than the PIN on my Yubikey. In theory, at least, that private key has never seen the Internet, and will never exist in storage or memory of any machine I sign software on.</p>
<p>All of that trouble also means that if I use that same key for strong name signing, I can never <em>accidentally</em> publish the private key in a repo. Like any good security process, the best way to prevent a leak is to make it impossible to leak.</p>
<h3 id="strong-name-signing-with-a-yubikey">Strong Name Signing with a Yubikey</h3>
<p>To strong-name sign, you use the <code>sn.exe</code> tool. Unfortunately, it’s not exactly straight-forward to use this tool with a certificate installed to a Smart Card. It’s, apparently, done so infrequently, that the documentation gives very few hints as to how to actually accomplish it.</p>
<h4 id="step-1-get-the-smart-card-crypto-provider">Step 1 - Get the Smart Card Crypto Provider</h4>
<p>The first thing you need to do is point <code>sn.exe</code> at the right crypto provider. By default, the <code>sn</code> tool uses the <code>Microsoft Base Cryptography Provider</code>, which won’t find the key on your Smart Card. By default, Windows 8+ uses the <code>Microsoft Base Smart Card Crypto Provider</code> for smart cards, but if you’ve installed other smart card providers (OpenSC), this may be different, so we’ll verify that.</p>
<p>Launch an Administrator PowerShell prompt – keep it open because we’ll use it later – but for now, run the following:</p>
<pre class="prettyprint"><code class="language-ps1 hljs tex">CD HKLM:<span class="hljs-command">\SOFTWARE</span><span class="hljs-command">\Microsoft</span><span class="hljs-command">\Cryptography</span><span class="hljs-command">\SmartCards</span>
gci *yubikey*</code></pre>
<p>Look for <code>Crypto Provider</code>. That’s the provider for your yubikey. Open up a (non-administrator) Developer Command Prompt and <code>cd</code> to the folder that has the library you want to sign. Run the following command:</p>
<pre class="prettyprint"><code class="language-cmd hljs ruby"><span class="hljs-variable">$ </span>sn.exe -c <span class="hljs-string">"Microsoft Base Smart Card Crypto Provider"</span></code></pre>
<p>Replace <code>Microsoft Base Smart Card Crypto Provider</code> if the <code>Crypto Provider</code>, above, was different.</p>
<h4 id="step-2-get-the-key-container-name">Step 2 - Get the Key Container Name</h4>
<p>This switches the default provider to your Smart Card. Now for the tricky part. We need to find the Key Container name for your code signing key. I’ve written a script (signed, of course) that will print the key container for your code signing key, provided it’s stored within your user’s Personal key store (which, AFAIK, is where it needs to be, so if it isn’t stored there you’ll need to figure that bit out).</p>
<p>Download the <a href="https://gist.github.com/Diagonactic/4b1983d022902817f0ad952f2da7da03">gist for the script here</a>. Save the file and run it.</p>
<pre class="prettyprint"><code class="language-ps1 hljs mathematica">.\<span class="hljs-keyword">Get</span>-CodeSigningKeyContainer.ps1</code></pre>
<p>It’ll output something along the lines of:</p>
<pre class="prettyprint"><code class="language-txt hljs mathematica">Code Signing Key Located
Subject: CN=Matthew S. Dippel, ...
Thumbprint: <span class="hljs-number">983894</span>AA3EB7BEA35D01248F6F01C3A64117FA66
Container Name: <span class="hljs-string">'c0f031c2-0b5e-171b-d552-fab7345fc10a'</span></code></pre>
<p>Do a sanity check on the Subject/Thumbprint to make sure you’ve got the right key and if you’re happy with it, grab the text between the apostrophes on the last line. In the future (or if you want to use it as part of a script), you can run it with the <code>-Quiet</code> parameter and it’ll just spit out that value.</p>
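<p>In scripted use, the <code>-Quiet</code> output can be captured directly. A sketch, assuming the gist script sits in the current directory and a single code signing key exists in your Personal store:</p>

```powershell
# Sketch: capture the container name for use in later sn.exe commands.
# Assumes Get-CodeSigningKeyContainer.ps1 (the linked gist) is in the
# current directory.
$container = .\Get-CodeSigningKeyContainer.ps1 -Quiet
Write-Host "Using key container: $container"
```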
<h4 id="step-3-generate-a-signing-key-that-contains-only-the-public-key">Step 3 - Generate a Signing Key that Contains Only the Public Key</h4>
<p>Strictly speaking, I’m not sure if this is actually required, but it’s the only way I could get it to work. I’d like to find a workaround, and I’m guessing one exists possibly related to the <a href="https://msdn.microsoft.com/en-us/library/system.reflection.assemblykeynameattribute%28v=vs.110%29.aspx"><code>AssemblyKeyContainerName</code></a> attribute, but much like this whole process, it’s poorly documented and I couldn’t make it work. If I figure it out, I’ll update this post accordingly.</p>
<p>In your project folder (or solution directory – really, the location doesn’t matter much), run the following command in the Developer Command Prompt:</p>
<pre class="prettyprint"><code class="language-cmd hljs avrasm">sn<span class="hljs-preprocessor">.exe</span> -pc <span class="hljs-string">"c0f031c2-0b5e-171b-d552-fab7345fc10a"</span> key<span class="hljs-preprocessor">.snk</span> sha256</code></pre>
<p>Replace the <code>c0f031c2-0b5e-171b-d552-fab7345fc10a</code> with <strong>your</strong> container name from the PowerShell script above.</p>
<p>What we’re doing here is asking the Strong Name tool to produce the file <code>key.snk</code> with <em>only</em> the public key (after all, we’re using a Smart Card that has no way of providing <code>sn.exe</code> with the private key). We’ve told it to use <strong>SHA-256</strong> explicitly, since (I think) it defaults to <strong>SHA-1</strong>, which is considerably weaker.</p>
<h4 id="step-4-tell-visual-studio-to-use-the-key-and-delay-sign">Step 4 - Tell Visual Studio to Use The Key and Delay Sign</h4>
<p>This is the lousy part. We have to delay-sign, which is supposed to kill the debugger. We’ll update the project to finish the signing process once the build is finished, but I’m not sure if that will add some steps to debugging (I just figured this out this evening and haven’t gotten that far in testing, yet). Right-click the library project and choose <em>Properties</em>. Go to the <em>Signing</em> tab. At the bottom, check the box that says <strong>Sign the assembly</strong>. In the drop-down box, pick <em>&lt;Browse…&gt;</em> and select the <code>key.snk</code> you generated, above. Then, check the box that says <strong>Delay sign only</strong>.</p>
<p>Build the project. You may notice a small hang after build starts, followed by your PIN entry prompt. Provide your smart card’s PIN and build will continue.</p>
<p>If you see it sitting there for more than a few seconds, hit <em>ALT+SHIFT+TAB</em> and you’ll see your PIN entry prompt pop up (ALT-TAB works, too, but every time this has happened to me, the PIN entry dialog has been the last item in the window list).</p>
<h4 id="step-5-signing-the-library-manually">Step 5 - Signing the Library Manually</h4>
<p>We’ll automate this as part of the build, shortly, but it’s helpful to do it by hand, once, since you’ll see the output for the signing operation in the command prompt and won’t have to hunt through build output. Go back to that Developer Command Prompt and <code>cd</code> to the output folder that has the <code>.dll</code> or <code>.exe</code> file that was generated from the build.</p>
<p>Run the following command:</p>
<pre class="prettyprint"><code class="language-cmd hljs avrasm">sn<span class="hljs-preprocessor">.exe</span> -Rc MyLibrary<span class="hljs-preprocessor">.dll</span> <span class="hljs-string">"c0f031c2-0b5e-171b-d552-fab7345fc10a"</span></code></pre>
<p>Note the change in order between the <code>-pc</code> and the <code>-Rc</code> commands. This one takes the file first. What you’ve done here is <em>re-signed</em> the file with the signature. If everything worked, you should have had a window pop up asking for the PIN from your smart card.</p>
<h4 id="step-6-bonus-code-sign-the-libraryexecutable">Step 6 - Bonus - Code Sign the Library/Executable</h4>
<p>As I mentioned above, strong name signing isn’t Code Signing. If you want your library to be Authenticode signed, you’ll need to do that separately. There are a few ways to do this, but if you’ve got a Comodo certificate, I use the following command in the Developer Command Prompt:</p>
<pre class="prettyprint"><code class="language-cmd hljs avrasm">SignTool sign /fd sha256 /tr http://timestamp.comodoca.com/?td=sha256 /td sha256 /as /v MyLibrary<span class="hljs-preprocessor">.dll</span></code></pre>
<p>When you’re using a Code Signing certificate on a Yubikey, provided there’s only one code signing certificate in your certificate store, there’s no need to point it at the specific certificate. You’ll see output indicating that the library/executable was signed properly.</p>
<p>There’s one thing worth noting here, though. If you need compatibility with Windows Vista or Windows XP, you need to sign the executable <strong>twice</strong>. The above method will only work for Windows 7 and above. To sign in a manner that is compatible with Windows XP and above, yet still includes the more secure signature for Windows 7 and above, use the following commands:</p>
<pre class="prettyprint"><code class="language-cmd hljs avrasm">SignTool sign /t http://timestamp.comodoca.com /v MyLibrary.dll
SignTool sign /fd sha256 /tr http://timestamp.comodoca.com/?td=sha256 /td sha256 /as /v MyLibrary.dll</code></pre>
<p>The first command signs in a Windows XP/Windows Vista compatible manner, the second is identical to what we did, above.</p>
<h5 id="what-about-non-comodo-certificates-a-note-about-timestamp">What About Non-Comodo Certificates? A Note About Timestamps</h5>
<p>I’m really not sure; I don’t have one. The issue is with that <code>/t http://timestamp.comodoca.com</code> switch. You <em>can</em> leave it off, but it’s an <strong>exceptionally bad idea</strong>. Your code-signing certificate <em>expires</em> at some point, or you may lose the private key and need to get another one issued, which will revoke the current one. The certificate you’re using isn’t all that different from one used for EV domains. When those expire, you replace the certificate on the server and everything’s fine. You can’t, however, replace the signatures on all of the things you’ve signed that have been copied onto other peoples’ machines. A timestamp service addresses this by proving the signature was made while the certificate was still valid – that’s what this URL points to.</p>
<p>I <em>assume</em> the COMODO timestamp service is meant to be used with COMODO certificates. Chances are good that the company you purchased yours from operates its own. Consult their site to see what the appropriate values are (bearing in mind that there are two kinds of timestamp service: the older Authenticode kind, used with <code>/t</code>, and the RFC 3161 kind, used with <code>/tr</code> and <code>/td</code>).</p>
<p>There is also at least one public timestamp service out there that allows its use provided you make very few requests. Whatever timestamp service you use, make sure you consult its support area to determine what the request limits are.</p>
<h4 id="finally-automating-it-all">Finally - Automating it All</h4>
<p>Having to do all of these steps every build is a bit much. Let’s add some post-build steps to automate it all. Go back to the project properties and choose Build Events.</p>
<p>Put the following into the Post Build (see the note below to make sure you change the right things):</p>
<pre class="prettyprint"><code class="language-cmd hljs tex">"C:<span class="hljs-command">\Program</span> Files (x86)<span class="hljs-command">\Microsoft</span> SDKs<span class="hljs-command">\Windows</span><span class="hljs-command">\v</span>10.0A<span class="hljs-command">\bin</span><span class="hljs-command">\NETFX</span> 4.6.2 Tools<span class="hljs-command">\x</span>64<span class="hljs-command">\sn</span>.exe" -c <span class="hljs-string">"Microsoft Base Smart Card Crypto Provider"</span>
"C:<span class="hljs-command">\Program</span> Files (x86)<span class="hljs-command">\Microsoft</span> SDKs<span class="hljs-command">\Windows</span><span class="hljs-command">\v</span>10.0A<span class="hljs-command">\bin</span><span class="hljs-command">\NETFX</span> 4.6.2 Tools<span class="hljs-command">\x</span>64<span class="hljs-command">\sn</span>.exe" -Rc <span class="hljs-string">"$(TargetPath)"</span> <span class="hljs-string">"c0f031c2-0b5e-171b-d552-fab7345fc10a"</span>
"C:<span class="hljs-command">\Program</span> Files (x86)<span class="hljs-command">\Windows</span> Kits\10<span class="hljs-command">\bin</span><span class="hljs-command">\x</span>64<span class="hljs-command">\signtool</span>.exe" sign /fd sha256 /tr http://timestamp.comodoca.com/?td=sha256 /td sha256 /as /v <span class="hljs-string">"$(TargetPath)"</span></code></pre>
<p><strong>IMPORTANT:</strong> Those are the paths to the files <em>on my system</em>. Check the path to <code>sn.exe</code> and <code>signtool.exe</code> and make sure to replace the <code>Crypto Provider</code> and put in <em>your</em> key container. Mine isn’t going to work for you.</p>
<p>Save everything.</p>
<p>The good news is that for every other project, all that’s required is running the <code>sn.exe -c</code> and <code>sn.exe -pc</code> commands (Steps 1 and 2) once for each project and pasting whatever you ended up with above into the project properties. That makes repeating this for anything else you’ve got very easy. It’s also portable between machines, provided the paths are the same (though you can replace those with environment variables for Program Files and such). The key container name will be the same on other machines. There are some caveats here relating to having more than one copy of the key, or more than one smart card, which I ran into; alternatively, you could use the Key Container script above to get the container name on every build.</p>
<p>There’s a bit of bad news, though:</p>
<ul>
<li>You’re going to get prompted for your PIN not once, not twice, but <em>three times</em> on every build. I’m not aware of any functionality that allows the OS to cache this operation, but if I find it, it’ll be the first thing I fix since that’s obnoxious.</li>
<li>Your project will not build without your Yubikey or Smart Card. This also means that if your project is open source, people downloading your code will get build errors. Obviously, you don’t want strangers to be able to sign your code with your key, but you do want them to be able to build an unsigned version. Make sure you add a note on how to work around this issue to your <code>readme.md</code> file.</li>
</ul>
<p>Once you’re all done, build the project and type that PIN three times.</p>
<h3 id="troubleshooting">Troubleshooting</h3>
<p>I’ve had a pretty terrible time getting this to work, and ran into a few gotchas. These are from memory and may not be correct, but I’m leaving them here as things to try if you get stuck.</p>
<h4 id="it-probably-matters-if-youre-running-elevated">It probably matters if you’re running elevated</h4>
<p>I had a few projects in the past that couldn’t be debugged unless Visual Studio was launched as an administrator. I’m fairly certain that the act of elevation will cause problems in locating the certificate in your personal user store, which is why I specified “A non-administrator Developer Command Prompt” above. If you can’t get the project to build and it complains about not being able to find the key, make sure you’re running without UAC elevation as the user who has the key in their personal certificate store.</p>
<h4 id="more-than-one-smart-card">More Than One Smart Card</h4>
<p>My laptop has a TPM and for convenience, I created a Virtual Smart Card from the TPM module (it’s a cool feature that makes your TPM emulate a Smart Card and it basically negates the need for having a Yubikey). The problem is that when you set <code>sn.exe</code> to use the Smart Card provider, the Virtual Smart Card will be the one that’s selected, not the Yubikey, if the two smart cards share the same provider. I’m sure there’s a better workaround, but since I have a Yubikey, I simply deleted the TPM Smart Card.</p>Matthew S. Dippelhttp://www.blogger.com/profile/09065753238713480937noreply@blogger.com0tag:blogger.com,1999:blog-3092647437783830192.post-37222048588088747962017-07-29T14:11:00.001-04:002017-07-29T14:12:58.184-04:00HOWTO: Import Keybase.io Public Keys to SSH authorized_keys<p>A little while back I was looking for a way to add a handful of users to the <strong>authorized_keys</strong> file on some test servers. </p>
<p>These servers necessarily required only one account, which would be used to log in and troubleshoot whenever that was needed. They would be rebuilt every morning, and it probably would have been fine to share a password and just log in with shared credentials, but the security guy in me is allergic to enabling Challenge/Response authentication. The alternative – sharing a public/private keypair among users – is also a huge no-no<sup>0</sup>.</p>
<p>Unfortunately, where public/private keys were in use, they were generally generated by the users themselves. One of the perks of being at a dev shop with a bunch of folks who seriously know what they’re doing is that they have generally done this ‘correctly’; however, we didn’t have a central server that stored a record of the public keys for easy distribution.</p>
<p>Another side-effect of being at a dev shop is that many of the users were <a href="https://keybase.io/">Keybase</a> users. Unfortunately, Keybase keys are PGP keys, not SSH keys, and the two key formats are not interchangeable. Worse still, they’re really not <em>designed</em> for the same purpose; in the GnuPG world, a key used for authentication would almost always have a sub-key for that purpose. Having been using my Keybase key for SSH login for a while, I’ve had a script (albeit one that only works with gpg v1) to automate exporting the public/private keypair, making it easy to get the public key to the server with a simple <code>ssh-copy-id</code>. But what about when I have a few users I want to provision without <em>ever handling their private key</em>? I couldn’t find a good reference for doing that, so I figured it out on my own.</p>
<h2 id="importing-a-gpg-public-key-without-the-private-key-and-without-installing-the-keybase-client">Importing a GPG public key <em>without</em> the private key and <em>without</em> installing the keybase client</h2>
<p>I wrote a shell script, <a href="https://gist.github.com/Diagonactic/82f4b769291565f14e8485f5827976aa">located here</a>, if you want to skip the details and just run it.</p>
<p>Simply login as the user you wish to add an authorized key to and:</p>
<pre class="prettyprint"><code class="language-bash hljs ">chmod <span class="hljs-number">770</span> ./authorizePublicKeybaseId.sh <span class="hljs-comment"># only needed the first time</span>
./authorizePublicKeybaseId.sh &lt;id&gt; <span class="hljs-comment"># where ID is the keybase ID</span></code></pre>
<p>It requires GnuPG 2 to execute (at least version 2.1.11) because it relies on a feature added in that version.</p>
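<p>To illustrate the kind of version gate involved (this is a sketch of the idea, not the script itself – the function name and the exact checks are mine), finding a usable v2 binary can be done like this:</p>

```shell
# Hedged sketch: locate a GnuPG 2 binary and verify it is at least 2.1.11.
# (Illustrative only; the real script's checks may differ.)
find_gpg2() {
    for candidate in gpg2 gpg; do
        command -v "$candidate" >/dev/null 2>&1 || continue
        # First line of `gpg --version` is "gpg (GnuPG) X.Y.Z"; take the last field.
        ver=$("$candidate" --version | awk 'NR==1 { print $NF }')
        case "$ver" in
            2.*)
                # If 2.1.11 sorts first, then $ver >= 2.1.11
                if [ "$(printf '%s\n' 2.1.11 "$ver" | sort -V | head -n1)" = "2.1.11" ]; then
                    echo "$candidate"
                    return 0
                fi
                ;;
        esac
    done
    echo "none"
    return 1
}

echo "Detected gpg binary: $(find_gpg2)"
```

On openSUSE and most current distributions this will land on <code>gpg2</code>; on distributions that ship v2 as plain <code>gpg</code>, it falls through to that.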
<p>The script works by grabbing the public key via keybase.io’s public API (beta) and calling GnuPG 2 with <code>--export-ssh-key</code> (with the key ID suffixed with “!” to force export of that exact key) to convert the key from GnuPG public key format to SSH public key format.</p>
<p>Because various distributions’ packagers install <code>gpg</code> in different ways, there’s a few checks to figure out which <code>gpg</code> binary is version 2 (often it’s <code>gpg2</code>) and a check to ensure the v2 binary is at the right minor/patch versions to successfully run the script. I also discovered some odd differences in the way that GnuPG 2 behaves between a few distributions – sometimes returning the short 32-bit key ID rather than the 64-bit key ID – so I take an extra step to get the 64-bit ID with some <code>awk</code> parsing.</p>
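<p>To make that key-ID step concrete, here’s a small sketch using simulated <code>gpg</code> output (the field layout matches <code>--list-keys --with-colons</code>, where the tenth field of the <code>fpr</code> record is the full fingerprint, and the last 16 hex digits of a v4 fingerprint are the 64-bit key ID):</p>

```shell
# Hedged sketch: extract the full fingerprint from machine-readable gpg output,
# then take its last 16 hex digits - the 64-bit key ID. The sample record below
# is simulated; real output would come from: gpg2 --list-keys --with-colons <id>
sample='pub:u:4096:1:89ABCDEF01234567:1500000000::u:::scESC::::::23::0:
fpr:::::::::0123456789ABCDEF0123456789ABCDEF01234567:'

fpr=$(printf '%s\n' "$sample" | awk -F: '$1 == "fpr" { print $10; exit }')
longid=$(printf '%s' "$fpr" | tail -c 16)   # last 16 hex digits = 64-bit key ID
echo "$longid"
```

This avoids relying on whichever key-ID length a particular build of GnuPG decides to print.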
<p>Currently, this only handles grabbing the public key, and it does so without touching the private key (which is something that requires much more delicate handling). I’m working on a script to download/import the private key (as well as password-protecting both the SSH private key and the copy in the GnuPG database). I’ll post that as soon as I’m comfortable that it’s something resembling “safe”, but for the time being, there are several scripts out there that allow you to do this, and I’ve tested a few of them against the method I’m using here. They all have worked.</p>
<p><sup>0</sup> I sort of hope I don’t have to explain why, but one <em>big</em> reason is that if one of those employees leaves the company, the shared credential has to be destroyed and removed from every host and a new one has to be issued to all of those users. If one uses</p>Matthew S. Dippelhttp://www.blogger.com/profile/09065753238713480937noreply@blogger.com0tag:blogger.com,1999:blog-3092647437783830192.post-76293627834153541112017-07-05T18:45:00.001-04:002017-07-05T18:45:18.230-04:00Resetting the Visual Studio Experimental Instance Visual Studio 2010-2017 via PowerShell<p>There’s a handful of things that you have to do frequently enough when debugging a Visual Studio extension that it becomes <em>almost</em> routine, but not frequently enough for you to actually remember the exact shape of the command you need to run.</p>
<p>Since I got horribly tired of having to hit up Bing every time I needed to remember the specific command, I decided to document some of them here.</p>
<h2 id="the-tldr-use-powershell-to-reset-the-visual-studio-experimental-instance">The TL;DR - Use PowerShell to Reset the Visual Studio Experimental Instance</h2>
<p>I’ve created a simple script to reset the Visual Studio instance, available <a href="https://gist.github.com/Diagonactic/7e8aaeba8159621b87d2d42bcaa07190">here</a>. It takes two parameters, -Version and -InstanceName (which matches the “RootSuffix” parameter used … most of the time). You needn’t run it from a Developer Command Prompt; it grabs the install locations from the registry.</p>
<h2 id="some-useful-bits-to-remember">Some Useful Bits to Remember</h2>
<h3 id="visual-studio-version-mapping-and-net-framework">Visual Studio Version Mapping and .Net Framework</h3>
<table>
<thead>
<tr>
<th>Marketing Version</th>
<th>Actual Version</th>
<th>Framework Versions</th>
</tr>
</thead>
<tbody><tr>
<td>2010</td>
<td>10.0</td>
<td>4.0</td>
</tr>
<tr>
<td>2012</td>
<td>11.0</td>
<td>4.5.2</td>
</tr>
<tr>
<td>2013</td>
<td>12.0</td>
<td>4.5.2</td>
</tr>
<tr>
<td>2015</td>
<td>14.0</td>
<td>4.6</td>
</tr>
<tr>
<td>2017</td>
<td>15.0</td>
<td>4.6.2</td>
</tr>
</tbody></table>
<h3 id="default-visual-studio-paths">Default Visual Studio Paths</h3>
<p>For these defaults, I’m assuming you’re on a 64-bit operating system. If you’re still stuck banging rocks together on a 32-bit OS, just knock out the (x86) where you see it.</p>
<h3 id="visual-studio-2010-2015">Visual Studio 2010 - 2015</h3>
<p>The paths for these versions have been pretty predictable. They start in <code>%ProgramFiles(x86)%</code>, which usually maps to <code>C:\Program Files (x86)</code>, and are stored in <code>Microsoft Visual Studio 1x.x</code>, where <strong>x</strong> corresponds to one of the version numbers in the <em>Actual Version</em> column.</p>
<p>Install Root:</p>
<pre class="prettyprint"><code class="language-powershell hljs css">"$<span class="hljs-rules">{<span class="hljs-rule"><span class="hljs-attribute">env</span>:<span class="hljs-value"><span class="hljs-function">ProgramFiles(x86)</span></span></span></span>}\<span class="hljs-tag">Microsoft</span> <span class="hljs-tag">Visual</span> <span class="hljs-tag">Studio</span> 1<span class="hljs-tag">x</span><span class="hljs-class">.x</span>"</code></pre>
<p>… or if you prefer <code>cmd.exe</code>:</p>
<pre class="prettyprint"><code class="language-cmd hljs perl"><span class="hljs-string">"<span class="hljs-variable">%ProgramFiles</span>(x86)<span class="hljs-variable">%\</span>Microsoft Visual Studio 1x.x"</span></code></pre>
<h3 id="visual-studio-2017">Visual Studio 2017</h3>
<p>Things were reorganized a little bit with Visual Studio 2017. The install root is now located at:</p>
<pre class="prettyprint"><code class="language-powershell hljs css">"$<span class="hljs-rules">{<span class="hljs-rule"><span class="hljs-attribute">env</span>:<span class="hljs-value"><span class="hljs-function">ProgramFiles(x86)</span></span></span></span>}\<span class="hljs-tag">Microsoft</span> <span class="hljs-tag">Visual</span> <span class="hljs-tag">Studio</span>\2017\<<span class="hljs-tag">Edition</span>>"</code></pre>
<p>Where <em>&lt;Edition&gt;</em> is going to correspond to the edition: Community, Professional, or Enterprise.</p>
<p>In addition, the <code>RootSuffix</code>, at least on my machine, is only part of the suffix name. This is a fact that Visual Studio understands, but the tool for creating/managing the experimental instances from the command prompt does not.</p>
<p>The PowerShell script provided above will provide you with experimental instance names if you attempt to reset one that doesn’t exist (as would happen if you provided <code>Exp</code> but the name was actually <code>_70a4f204Exp</code>).</p>
<h2 id="refresh-the-experimental-instance-with-the-script">Refresh the Experimental Instance with the Script</h2>
<p>Basic help can be found by typing <code>Get-Help ResetExperimentalInstance.ps1 -Full</code>, but here’s how you use it:</p>
<pre class="prettyprint"><code class="language-powershell hljs xml">.\ResetExperimentalInstance.ps1 [-InstanceName] <span class="hljs-tag"><<span class="hljs-title">InstanceName</span>></span> [-Version <span class="hljs-tag"><<span class="hljs-title">Version</span>></span>]</code></pre>
<p><strong>Version</strong> - Optional if you have only one version of Visual Studio installed. Note that this includes applications that <em>use</em> other versions of Visual Studio, like SQL Management Studio and System Center Configuration Manager’s management tools. If you have more than one version installed, the script will exit, but it will print the versions that are available.</p>
<p><strong>InstanceName</strong> - Required - <em>Usually</em> the same as what is provided as the <code>/RootSuffix</code> parameter in the Debug panel within Visual Studio for your extension. However, it may actually be <code>_[some 32-bit Hex][RootSuffix]</code>, i.e. <code>_71af83c4Exp</code> for the <code>Exp</code> instance. If a corresponding folder for that instance is not found, you’ll be given a list of all of the instances that are found for the provided version and prompted as to whether or not you want to create a new experimental instance.</p>
<p>The <code>_</code> in the long name is required for the Visual Studio provided tool, <code>CreateExpInstance.exe</code>, which the script uses. However, the script will look for a folder that only differs by the starting <code>_</code> and will correct your InstanceName if that’s the only difference.</p>Matthew S. Dippelhttp://www.blogger.com/profile/09065753238713480937noreply@blogger.com0tag:blogger.com,1999:blog-3092647437783830192.post-85303560594477638792016-11-29T22:58:00.000-05:002016-11-29T23:01:01.985-05:00HOWTO: Pair an Intermatic InTouch CA5100 Accessory Switch to SmartThings<h5>Problem</h5>
<p>You found a good deal on an <a href="http://amzn.to/2gHEjBw">Intermatic InTouch CA5100 Accessory Switch</a>, you installed it and consulted the manual to set it up only to be pointed at another manual -- not included in the package -- for pairing instructions. Hopefully, you also noticed that this is a switch that doesn't actually control anything; it simply sends Z-Wave commands and reports its status (for people like me that have a regular outlet where it would have been really convenient to have a light switch).</p>
<p>You've searched the internet and you have no doubt discovered that Intermatic appears to have decided to pretend they never created this product. There's no reference to it on their web site and several URLs that once pointed to manuals on a different site run by Intermatic now just redirect to their homepage. Awesome.</p>
<h5>How to Pair</h5>
Well, first get it all wired up. Hopefully the LED is cycling Red<->Blue. One small caveat: when pairing, the reception of the switch is significantly weaker than when it is functioning normally, so you may need to move your SmartThings hub closer to the device to get it to work (I can't confirm this -- mine was in the same room -- it's just something I read several times). You can move it back where it was when you're done. Open the app and choose Add a Device. It'll start searching for devices. Hit "Up", then "Down", and then press both buttons on the switch at the same time. It should show up as a Generic Z-Wave Device. Add it, give it a name, and you're part way there.
<h5>Making it work</h5>
<p>You're not quite there, yet. Though the device is recognized, SmartThings doesn't know what it does, yet. Visit <a href="https://community.smartthings.com/t/intermatic-ca5100/5746/4">this helpful forum post</a> for a groovy script that can be added using <a href="http://ide.smartthings.com/">the IDE</a>. After you've published the script, you can visit the <b>My Devices</b> tab, select the switch and change its type to Intermatic CA5100.</p>
<p>I'm writing this post mostly for myself because I know at some point I'm going to have to do this again and after having spent about an hour trying to find some hint as to how to pair this thing, I can only imagine it'll get harder in the future. In fact, the only reason I discovered this at all was because I went to the Amazon product reviews and found someone who had mentioned the pairing procedure in passing during his review. Once that product is gone, I don't expect that information to be there any longer. As for the device itself -- it was inexpensive compared to others and fit my needs perfectly. Now that it's working, I have no complaints, but failing to include a manual with three simple steps to pair the thing is a pretty big omission and likely caused endless support calls.</p>Matthew S. Dippelhttp://www.blogger.com/profile/09065753238713480937noreply@blogger.com0tag:blogger.com,1999:blog-3092647437783830192.post-16364181395311227372016-09-26T18:18:00.000-04:002016-11-06T21:37:47.621-05:00HOWTO: Getting the HubPiWi Blue kernel modules installed on Raspbian Jessie<h5>Background</h5> <p>The HubPiWi Blue is an add-on for the Raspberry Pi Zero that gives you three USB ports and a combined WiFi/Bluetooth adapter (RealTek chipset). If you poked around after install, you likely noticed that there's a module for a Realtek device already detected and running. We're part of the way there; unfortunately, it's only the "Bluetooth" part and I'm not even sure that it truly works. Thankfully, RealTek provides module sources for a Linux driver and they work great with Raspbian (with a few Makefile tweaks). Unfortunately, this means we'll <em>not</em> be able to update the kernel after this without repeating a lot of these steps.</p> <h5>Initial Setup</h5> <p>I’ll skip the basics except for this: Get an SD card, load the latest Raspbian image onto it and pop that into the Pi Zero’s card slot. You’ll want a monitor/keyboard handy or an FTDI adapter. 
And you’ll need some time – my recommendation: line up some chores to do during each of the major steps and you’ll get a bit done while you’re waiting.</p> <h5>What you’ll need</h5> <p>If you want to follow this post exactly, here’s what you’ll need. I’ve included notes about alternatives where I can, but I’m doing this as I write, so I’m providing what I used to make it work.</p> <ul> <li>A Raspberry Pi Zero <li>A HubPiWi Blue <li>USB power and Micro USB cable <li>An HDMI display and Keyboard or an <a href="http://amzn.to/2cQ87d1">FTDI Adapter</a> <li>A Wireless network you intend to attach to.</li></ul> <h5>Before We Build the Driver</h5> <p>As always, run the following commands:</p>
<pre class="brush:bash">sudo apt-get update
sudo apt-get upgrade
sudo apt-get dist-upgrade
sudo reboot</pre>
<p>It’s a Pi Zero, so it’s going to take a while. To its credit, it performed faster than my original Raspberry Pi. And for $5 (or if you were one of the lucky ones that grabbed one at Microcenter for a buck), you can’t really complain.</p>
<p>I used the non-LITE version of Raspbian and ended up with 122 packages that needed updating, which took around 35 minutes.</p>
<h5>Building and Installing the WiFi Kernel Module</h5>
<p>The HubPiWi uses the Realtek 8723BU Chipset and the same Bluetooth module found in the 8723AU. Luckily, there are kernel modules for these. I've created a fork of the WiFi driver repository and modified the Makefile to allow for an easy build on Raspbian, so we'll clone my forked repository and use that to build the module.</p><pre class="brush: bash">cd ~/
git clone https://github.com/Diagonactic/rtl8723bu.git</pre>
<p>The driver is kernel version specific, so we need to get the correct linux headers.</p><pre class="brush:bash">sudo apt-get install raspberrypi-kernel-headers</pre>
<p>Time for some more chores. This clocked in at about 10 minutes.</p><pre class="brush:bash">cd ~/rtl8723bu
make </pre>
<p>This will run about 30-40 minutes. When you’re done, you’ll have a compiled driver and be ready to install: <pre class="brush:bash">sudo make install </pre>
<p>At this point you can either reboot with the “reboot” command or type the following: <pre class="brush:bash">sudo insmod 8723bu.ko
ifconfig
</pre>
<p>You should see your wlan0 device ready to go! Of course, you still need to configure it to attach to your network. There are a variety of ways to do that, and several articles cover it, so search away and set that up and you'll be connected. This device also comes with Bluetooth support, so follow the remaining instructions to get that working if that's something you're interested in.</p>
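<p>For reference, one common way to do that on Raspbian is a <code>wpa_supplicant</code> network stanza. This is a generic example, not something specific to the HubPiWi – the SSID and passphrase are placeholders, and on Raspbian Jessie the file normally lives at <code>/etc/wpa_supplicant/wpa_supplicant.conf</code>:</p>

```shell
# Generic example of a wpa_supplicant network stanza (SSID/passphrase are
# placeholders). Written to a temp file here so the sketch is safe to run;
# on the Pi you would append this to /etc/wpa_supplicant/wpa_supplicant.conf.
conf=$(mktemp)
cat >> "$conf" <<'EOF'
network={
    ssid="YourNetworkName"
    psk="YourPassphrase"
    key_mgmt=WPA-PSK
}
EOF
cat "$conf"
```

With that in place, <code>sudo wpa_cli reconfigure</code> (or a reboot) should bring wlan0 onto your network.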
<h5>Building and Installing the Bluetooth Kernel Module (Optional)</h5>
<p>Since it’s always a good idea to only enable the features you’re actually going to use, if you have nothing to pair the device with or no use for Bluetooth at this time, you can skip this. These steps are Bluetooth-specific and doing them will not improve or affect your ability to get going with WiFi, which is more than likely what you wanted to get working, anyway.</p>
<p>The modules above do not include the Bluetooth part, so for that we need to grab and compile a new module.</p>
<pre class="brush: bash">
cd ~/
git clone https://github.com/lwfinger/rtl8723au_bt.git -b kernel
cd rtl8723au_bt
make
sudo make install</pre>
<p>Note the “-b kernel” on the git clone command. If you fail to include this, the make command will not work (and will instruct you to grab the kernel branch, which is why we’re including the -b kernel above). This will take about 15-20 minutes on the Pi Zero, so kick back and drink some more coffee.</p>
<p>Now all we need to do is insmod a few modules and we’re in business</p>
<pre class="brush: bash">
sudo insmod btrtl.ko
sudo insmod btintel.ko
sudo insmod btbcm.ko
sudo insmod btusb.ko</pre>
<p>Assuming you received no error messages, we can now verify that Bluetooth is working:</p>
<pre class="brush: bash">
sudo bluetoothctl</pre>
<p>You should see something along the lines of “[New] Controller xx:xx:xx:xx:xx:xx yourhostname [default]”
<p>That’s it! Pairing devices with this controller is done the same way it would be done with a Raspberry Pi 3. If you need instructions on that, I’ll leave you to google away since that may change as time progresses (but should always be the same process for this device as it is with others).
<h5>Bluetooth Device Compatibility</h5>
<p>If you chose to install the Bluetooth kernel module, there’s a small but important disclaimer. Bluetooth implementations can be hit and miss. Despite having a “certification” component that’s <em>supposed </em>to mean that a Bluetooth device will operate with anything else that’s certified (indicated by the Bluetooth logo being present on your device), this is often not the case in the real world. I expect you’ll have no problems with any Android or iOS device; however, if you’re trying to pair with something a little more exotic, like an older Microsoft Windows Mobile phone, Microsoft-branded “Sync” stereo, or other stereo/TV, you may have problems (I’m not picking on Microsoft directly here; these are just devices that I have owned that I’ve had Bluetooth pairing problems with in the past). Bluetooth keyboards, for instance, are notoriously painful to get to pair properly (you may want to use the GUI tools for this, though they will be very slow on the Pi Zero).</p>
<h5>Ongoing Maintenance Note - IMPORTANT!</h5>
<p>If you've been messing with Raspberry Pi hardware for a while, you'll recognize those first few steps as common steps for updating the Pi.</p>
<p>Nearly every bit of "help" will point you at these steps as a "do this first" (keeping software up-to-date is always a good idea). The issue, though, is that this will <em>sometimes</em> install a new kernel version. When that happens, the new kernel will not have a module for your WiFi adapter. Not to worry: simply repeat the process from <strong>Building and Installing the WiFi Kernel Module</strong> after a new kernel comes down and you'll be back up and running.</p>Matthew S. Dippelhttp://www.blogger.com/profile/09065753238713480937noreply@blogger.com0tag:blogger.com,1999:blog-3092647437783830192.post-42553529646857135912016-07-14T22:51:00.001-04:002016-10-04T10:08:47.102-04:00Printing with PC Plus Polycarbonate Filament on a Maker Select v2<p>Several months ago I purchased a Maker Select 2. I believe my quote, most recently, was “It’s been a few months and the 3D printer is still the coolest thing I’ve ever owned.” It’s also, sometimes, one of the most frustrating.</p> <p>My current project is printing a multi-extruder printer (aiming for 4, but starting with 2) using a mash-up of a few different designs and I’m working on a Bowden extruder. Since I had the need to print some very strong parts for another project, I picked up <a href="http://amzn.to/2dFBUpw">some PC-Plus</a> after a bit of research.</p> <h5>This stuff is stronger than I’d imagined</h5> <p>Since I’ve had, now, four failed prints with this material, I’ve had an opportunity to test the physical properties. My tests indicate the most optimistic of the marketing materials is spot on. My PLA+ extruder body was able to be cracked pretty easily with a rubber mallet on my cement basement floor. The ABS and Nylon parts were pretty solid but one of the components was designed in such a way that the flexibility of those filaments was going to be a problem. </p> <p>This stuff was hard to break with a full-on metal hammer. 
It’s far less flexible than ABS or Nylon and the point at which it <em>will </em>flex is (in my unscientific estimation) about twice the pressure it takes to completely snap a PLA print. I’ll admit, it was kind of fun hammering the crap out of the part seeing how hard I’d have to hit it to get it to crack. About the only bad thing I can say is that it did dent pretty well, but it dented only well beyond the point other parts would have broken.</p> <h5>This material is the stuff that profanity is made out of</h5> <p>I’ve yet to run into a filament that is more difficult to print properly with. I’ve printed with PETT, PETG, T-Glase, ABS, PLA, PLA+. It’s safe to say that it takes all of the difficulties of each of these and combines them into one magnificent package. It’s very temperamental with regard to moisture as evidenced by the fact that it shipped in a vacuum sealed pack with a zip-lock seal for easy re-packing. Luckily we run the air conditioner here like it’s the arctic.</p> <p>It curls. No, I mean, like 80s perm psychotic curls. They used to ship it with a square of BuildTak. The manual and nearly everywhere you read says it is *required*. Thankfully, it’s not if you’re creative.</p> <p>To get that strength, you need to run your hot end at 260 degrees or higher (that’s as high as the Maker Select 2 goes, so that’s what I’m stuck with) and you need to print slowly (details below).</p> <h6>The BuildTak Option</h6> <p>The Maker Select v2 shipped with a BuildTak clone of some kind attached to the metal heated bed. I say “Of Some Kind” because this was one of the first things I removed from the printer since the PLA and Nylon I was printing with seemed to just bounce off of the surface of whatever this 3M product was. After trying a few things that worked well for me in the past, I gave up and purchased 3 sheets of BuildTak.</p> <p>Let me just say: I hate this stuff. 
Perhaps that’s strong sentiment borne out of hours of frustration with this filament more than it is a scathing rebuke of BuildTak, but I’ll never buy it again. </p> <p>The first problem is that it works a little too well. PC-Plus sticks to it like super glue and removing the part rips the surface off of the BuildTak. It’s difficult enough getting the bed leveled perfectly to the factory recommendations but now you have to figure out just how much higher you need to level it in order to get the part to stick properly, but not <em>too </em>well. That’s assuming the BuildTak doesn’t just pull itself right off of the glass due to the heated bed weakening the adhesive. Since I purchased the 127mm by 127mm sheets, I was printing the part right on the edge of the BuildTak and that’s exactly what happened to my second print. </p> <p>Second, and this might be a matter of me improperly cleaning the surface, but I was only able to use a non-ripped sheet <em>twice. </em>After that, it simply stopped sticking no matter how close I printed. </p> <p>Third, geez this stuff is <em>expensive!</em> Three of those tiny sheets were almost $10.00. The idea that I’d get about 6 prints for that price didn’t sit well with me.</p> <p>Lastly, I prefer to print directly on the glass because it makes the part <em>look </em>nice. BuildTak has a rough surface and it shows up in the finished product. That wasn’t so important for these parts (just looking at my printer with gray, green, clear, pink (!) and black parts indicates I don’t care what it looks like, I just want it to be durable and functional).</p> <h5>(Mostly) Ignoring the Manual</h5> <p>I’ll be the first to say that many of the manual’s recommendations worked fine except for the 0.33mm gap between the raft and the part (which resulted in a “3D printed turd” stuck to the hot end since it wouldn’t adhere to the part below). 
I’d imagine part of that had to do with the fact that I can’t get it up to a higher temperature with this printer.</p> <p>I really hate printing rafts. Watching filament get burned laying down a surface on top of a surface that should already be sticking, only to throw that part in the trash (or, if it’s ABS, store it to make more glue), is wasteful. Then there’s separating the part from the raft, which, since I rarely print rafts, I haven’t quite gotten right yet. It’s either stuck so hard that I have to risk breaking the part to remove it, or it fails to stick at all.</p> <p>I was able to get it to stick <em>perfectly</em>, though, by throwing out most of the recommendations and using a few settings that I had used with T-Glase and other finicky materials.</p> <p>First, clean the hell out of your bed (91% alcohol does the trick). Level your bed to the factory recommendations and make sure it’s absolutely perfect.</p> <p>Heat up the extruder to 260 and clean any filament from the last seven failed prints off of the hot end so it doesn’t become a magnet for the material it’s already laid down. This stuff sticks to virtually nothing except itself, and if you get any strings while printing, they’ll gob up and start removing portions you’ve already printed.</p> <p>Heat up the bed to 90 degrees and apply a nice layer of Elmer’s Glue Stick. Follow that up with a reasonable amount of ABS Glue (Google it; it’s easy to make). Let everything dry.</p> <p>I used a 0.1mm gap on the raft when I printed with a raft (I’ve not had to since I cracked the formula for getting this to print properly). Slow your printing down. I went to 40mm/s, with 20mm/s for the bottom layer and outer layer. I stuck with a 40% fill, though this was more because the part required it, and I used four solid layers on the top, bottom and sides. 
That may be overkill, but several forum posts recommended it, so I started with those settings.</p> <p>If your printer goes higher than 260 degrees, try going higher. I had layer adhesion issues under 260, but still occasionally ran into small sections that didn’t adhere properly at that temperature. That bit about making sure the head is clean is very important. Every one of my layer adhesion issues occurred because the head picked up a string, which picked up small bits of printed material as it went along until it got large enough to snag somewhere on the printed body and be deposited, causing the bed to sink slightly as the head passed over. This resulted in a small gap in a spot on the print. Those small gaps are enough to make a very strong part pathetically vulnerable to snapping. I resolved most of these issues over a few prints by slowing them down to the point where the head was adequately melting any smallish gobs as it passed over them and by doubling the retraction. Many forum posts recommended going as high as 290 degrees, which I’d imagine would allow faster printing and let gravity resolve some of the issues when gobs appear, but the printer I was using only goes to 260 degrees.</p> <p>For tall parts, consider taking a few of those Amazon boxes apart and making an enclosure. This will keep the temperature consistently higher while printing and reduce curling.</p> <p>Using these settings, along with the ABS Glue plus Glue Stick, made the part stick so hard to the glass that I had to use a razor blade to separate it. There was zero curling on a part that had several little finger-like points on it (and had failed to print properly with this stuff on anything else), so I’m fairly convinced this is the way to go for me from now on. YMMV.</p> <h5>Things that <em>didn’t </em>work</h5> <p>3D printing is often about experimentation to find the <em>easiest </em>process that produces consistent prints with a material. 
The only thing these attempts did was consistently produce curled parts or 3D printed turds. All of these were attempted directly on glass.</p> <h6>Hairspray</h6> <p>As is common with ABS, hairspray seems to make the curling worse. It will stick initially, but after several layers, it’ll start to curl upwards. If you’re lucky, the print will stick somewhere and you might have a salvageable part, if you don’t care about the looks and the curling occurs on a part of it that doesn’t affect its performance. I was printing an extruder, so there’s very little of it that can be off.</p> <h6>Glue Stick (alone)</h6> <p>Initially it stuck, and it didn’t curl as much as with the hairspray. This might have worked had the part been much smaller, but on the extruder body it failed after about the 20th layer, pulling completely off of the bed. I tried this twice and I’m fairly certain it’s not a good solution.</p> <h6>Elmer’s and Water</h6> <p>This performed similarly to, but worse than, the glue stick alone.</p> <h6>BuildTak after Two Prints</h6> <p>The surface might as well have been covered in olive oil.</p> <h6>That 3M BuildTak-like thing that Maker Select ships with</h6> <p>They gave me an extra one, which I put in a box since the one it came with performed so poorly, but I thought I’d give it a shot. It worked as well as it ever has, which is to say, not at all.</p> <h6>ABS Glue (alone)</h6> <p>I’ve read some forum posts from people claiming they simply applied ABS glue to the glass and were printing successfully with other materials. Perhaps mine is too diluted or I’ve done something wrong, but it’s never worked for me, even with ABS filament. This was no exception.</p> <p>I also tried mixing a few variants of these, especially the hairspray, since even though it seems to encourage more curling, once it gets a good initial stick, it won’t budge with other filaments I’ve tried. 
The only mix that worked was Glue Stick and ABS Glue, though to be fair, I didn’t try Elmer’s+Water with ABS Glue, and I have a feeling that would have worked. It’s just less convenient waiting on the Elmer’s to get tacky enough to begin printing.</p> <p>I’d love to know what anyone else has tried, or whether there’s a better method or something I’m missing, but this is a relatively new filament with lots of pain points, and it hasn’t been experimented with enough for the forums to offer great information, so I’ve been at a loss for solid help (which was one of the motivations for getting off my butt and writing this). If you’ve had success with this material using other tricks, please comment!</p> <p>Many thanks!</p>Matthew S. Dippelhttp://www.blogger.com/profile/09065753238713480937noreply@blogger.com0tag:blogger.com,1999:blog-3092647437783830192.post-55462918900873783822016-04-02T12:11:00.000-04:002016-04-02T12:11:30.895-04:00Sheraton, Bedbugs and how NOT to do Customer Support<div class="separator" style="clear: both; text-align: center;">
<iframe allowfullscreen="" class="YOUTUBE-iframe-video" data-thumbnail-src="https://i.ytimg.com/vi/nUzC89rBYxw/0.jpg" frameborder="0" height="266" src="https://www.youtube.com/embed/nUzC89rBYxw?feature=player_embedded" width="320"></iframe></div>
<h5>
Adventures in Bed Bugs</h5>
I've traveled enough to know that bed bugs are always a risk at any hotel stay. Generally, I do a pretty reasonable check of the room on arrival. When I find them, I insist on being put up in another hotel. Finding bed bugs is usually difficult. They hide well and the best approach is to look for fecal matter (little black/brown spots -- blood of past guests -- on the bedding and box springs).<br />
Usually, I won't leave bad feedback on one of the travel sites, tweet, or otherwise publicly shame a hotel. Few things are more damaging, because people have no idea how common a problem this actually is (if they knew, they might just stay home and skip traveling altogether). And usually the property handles the situation well. This time is different. I've never had a hotel and hotel chain so completely disregard such a serious problem.
<br />
<h5>
Sheraton Lake Buena Vista Hotel, Orlando, Florida</h5>
We failed to notice the problem until our very last day, and considering how bad it was, I can only plead laziness on my part. When my wife and I arrived and saw the room adorned with bright white sheets, bedding and even a comforter, we were lulled into a false sense of security. We stayed there 4 days, and on the final day, my wife shot out of bed after she scratched at her leg and found the small sting had a source -- a bed bug.<br />
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjMiplFeutih7SxgYLXFDRwWli32AwBiC0wC26vmcmNPPKwFGI6LM_0htr0brQf2zLhgkSAau-kyW0MPWcGB4ttYDsJpuGqSZbLQDjNHt_if64HlXkzAHyHMu0D3o1o7LOPJh5FX1FcA_7y/s1600/IMG_20160310_081743.jpg" imageanchor="1"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjMiplFeutih7SxgYLXFDRwWli32AwBiC0wC26vmcmNPPKwFGI6LM_0htr0brQf2zLhgkSAau-kyW0MPWcGB4ttYDsJpuGqSZbLQDjNHt_if64HlXkzAHyHMu0D3o1o7LOPJh5FX1FcA_7y/s320/IMG_20160310_081743.jpg" /></a><br />
It was about 8:00 AM and I had been looking forward to sleeping in a little bit after a hard week of work at a conference. That idea was instantly shattered. My wife grabbed one of the Zip Lock bags we put our toothbrushes in and captured the bugger. We brought it down to the front desk and returned to the room to get some more photos. It didn't take but a few seconds to find the second bug hanging around the head board.<br />
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi11rp5c-Aq0VAm2KMIomJ9FP1UepO3E8LPBFKkjewm46lDtVSmhCFei_y_sygq85uqaer5fGF8kCkcLs1p207oCqYuYGwbJw3bPTJ1ePZxRqmvGYDIxjOP8gBbVVk5KP1EaoWG0QsVn3PP/s1600/IMG_20160321_125259_01.jpg" imageanchor="1"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi11rp5c-Aq0VAm2KMIomJ9FP1UepO3E8LPBFKkjewm46lDtVSmhCFei_y_sygq85uqaer5fGF8kCkcLs1p207oCqYuYGwbJw3bPTJ1ePZxRqmvGYDIxjOP8gBbVVk5KP1EaoWG0QsVn3PP/s320/IMG_20160321_125259_01.jpg" /></a><br />
Up came the mattresses. What we discovered was <strong>nauseating</strong>.
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiGzbHkrY8XznH_3FcIsst5d3mDTi5PPa119EOVn1szm7nF1l-pWSK4ziH-zysf_A0H1o9TxqVU4J2B0AhMN9Sqz8gFyXME_PCzeDIB1waz09RXzsYHJfAXAzvWjFwc0IRoXLt74XsgPAYk/s1600/IMG_20160321_125251_01.jpg" imageanchor="1"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiGzbHkrY8XznH_3FcIsst5d3mDTi5PPa119EOVn1szm7nF1l-pWSK4ziH-zysf_A0H1o9TxqVU4J2B0AhMN9Sqz8gFyXME_PCzeDIB1waz09RXzsYHJfAXAzvWjFwc0IRoXLt74XsgPAYk/s320/IMG_20160321_125251_01.jpg" /></a><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhPmPzhfNAj8fMzPLdkehefm6wJ2jrAyA91S9Qo1xm1_Q03lJePDtEsa4m62Kl1D0CXUbsydid7JvUpt3AKLufTKCDxhnUKmqye49t4v-r_tMTIxZl9H0LAy7Dt7pQdHCu1IWYk8S0cACRf/s1600/IMG_20160321_125317_01.jpg" imageanchor="1"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhPmPzhfNAj8fMzPLdkehefm6wJ2jrAyA91S9Qo1xm1_Q03lJePDtEsa4m62Kl1D0CXUbsydid7JvUpt3AKLufTKCDxhnUKmqye49t4v-r_tMTIxZl9H0LAy7Dt7pQdHCu1IWYk8S0cACRf/s320/IMG_20160321_125317_01.jpg" /></a><br />
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh44EuiMZHg1qYQP5bThyphenhyphen49LdB2O7ixPwxI_68lIEpCZdzRvQ3RRzprA622OCrDTtM7ZwuLTRwBgBPSLPrIYJm88ayzZADvLnpckVRMNTq7aNWtTyz_mp7IO7gTZAiE9Jn-kxGu8_sPKt3j/s1600/IMG_20160321_125255_01.jpg" imageanchor="1"><img border="0" height="640" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh44EuiMZHg1qYQP5bThyphenhyphen49LdB2O7ixPwxI_68lIEpCZdzRvQ3RRzprA622OCrDTtM7ZwuLTRwBgBPSLPrIYJm88ayzZADvLnpckVRMNTq7aNWtTyz_mp7IO7gTZAiE9Jn-kxGu8_sPKt3j/s640/IMG_20160321_125255_01.jpg" width="476" /></a><br />
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg-0CVfu0Ygq_20TXYcnZdfVQuM9CUHHMuzv02n7wV6dj1601mdAbFSLVBXsGEN1c6HBdr6PICclwwuCM65sWVb6uThHhr2RIXORfdKqvzHHNTMR7X0HVtYzuIII8ChCafVC4BVfosTvC-V/s1600/IMG_20160321_125305_01.jpg" imageanchor="1"><img border="0" height="640" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg-0CVfu0Ygq_20TXYcnZdfVQuM9CUHHMuzv02n7wV6dj1601mdAbFSLVBXsGEN1c6HBdr6PICclwwuCM65sWVb6uThHhr2RIXORfdKqvzHHNTMR7X0HVtYzuIII8ChCafVC4BVfosTvC-V/s640/IMG_20160321_125305_01.jpg" width="478" /></a><br />
Those "fecal matter" inspections, had I bothered to do them, would have immediately had me leaving the room. On top of fecal matter, we found four bugs that were smashed under both beds' box springs. My assumption is that the infestation had been present in the room when things were being moved around and nobody in housekeeping had noticed. Considering the large, smashed, blood stains and pieces of dead bug that were <strong>very</strong> easy to see, I'm terribly disappointed that staff paid so little attention, though after my experience dealing with Starwood in general, I can't say I'm surprised. Housekeeping should be trained to identify fecal matter and other secondary evidence of the presence of bed bugs and should never have allowed a guest to occupy a room suspected of having an infestation. Had the hotel been paying any reasonable amount of attention, though (or had I), they would have seen the critters actually <strong>walking around</strong>.<br />
<h5>
Working with the hotel</h5>
The hotel staff kept our luggage and gave me a new bag that I could use to bring home the three computers I had traveled with. They assured me the items would be returned to me via FedEx, with the clothes heat-treated. They explained that the other items could not be heat-treated but did not explain what would be done with them. We had to fly out that morning, so I couldn't stick around long to ensure things were handled correctly; we trusted the hotel. In retrospect, I should have been less trusting.
<br />
<h6>
Lesson 1: FedEx doesn't mean fast, anymore</h6>
Years ago, when someone said "we'll FedEx that to you", it meant overnight. The delivery business has changed a lot--a fact I know well--but that didn't stop my mental picture of receiving my belongings in a reasonable amount of time from kicking in. Obviously, these are my things, and since I expected to go home with them the night I returned, I assumed the hotel would understand the urgency of returning my belongings quickly. I don't travel enough to warrant buying "two of everything". The hotel sent our luggage back FedEx Ground, which took a week. All of my wife's makeup was in the bag, as were my shaving products. We had to buy replacements.
<br />
<h6>
Lesson 2: Expect others to treat your stuff with little respect</h6>
I kind of expected my suit coat was going to need a pressing after its week-long journey. But I didn't expect my belongings to be destroyed. Because they placed my wife's makeup, a pen, and the rest of our toiletries in an unsealed bag, we found a mix of interesting stains on many of our clothes. My suit coat has a large black ink stain and a pinkish-brownish stain from hairspray mixed with pen and makeup. Most of the rest of the clothes suffered a similar fate. Worse, being in an unsealed bag and not knowing what was done with those items, I didn't know if they were safe to bring into my home, so the luggage went from the porch to the shed (where it remains today). I'm out about $500 in destroyed clothes (and this doesn't include the original bill for the stay).<br />
<h5>
Customer Service in an Age Without Humans</h5>
It's been about 4 years since I've had to work with a hotel to get something like this resolved. My past experience was with a different chain, so it's possible this is unique to Sheraton, but my sense from dealing with other customer service matters is that customer service has suffered greatly in the age of text. My first instinct was to send a message via e-mail. Visiting Sheraton's Web site yields a contact form -- with no way to attach the photographic evidence that truly conveys the severity of the situation. I started there and replied to the auto-reply with 8 photos I had snapped (we took a video also). Both messages yielded an e-mail reply with a reference number (a different one for each). Both assured me that I would be contacted. Neither was ever replied to, so after two days I made my first call. I referenced the e-mail, was given a reference number, and was told that the team that deals with this sort of problem was being engaged and I'd hear back from them soon. I also received an e-mail stating that I'd receive a reply "in five days" (not <strong>business days</strong>, though I had figured this might be the case). Nine days later, I called back, hoping this would get me to someone who could directly address the problem.<br />
I started off the call explaining my situation (which left the CSR speechless for a moment) and asked for a manager. I was passed off to "the manager" and explained what had happened. After a bit of hold time to review the problem (under a reference number I had to hunt down), I was told that it was being handled by Consumer Affairs. OK, so can you hand me off to whoever is handling it at Consumer Affairs? No. They gave me an e-mail address to reply to. I explained that I'd now had six different messages ignored by them via e-mail (one sent directly to the hotel I stayed at) and that I would really like to talk to someone whose <strong>job it is to actually solve the problem</strong> (I was far more polite than my writing makes it sound). I asked to speak to her supervisor.
<br />
<h5>
Supervisors In Name Only</h5>
It's long been a game in the CSR world that "Manager" and "Supervisor" really mean nothing anymore. Most people get frustrated enough to ask for supervisors, and CSRs are already at a disadvantage, having to field more calls than they have people to field them. If every call has to be passed to a supervisor, then they have to have a lot of supervisors. This last supervisor was the least helpful, explaining again that there was absolutely nothing she could do and that the only option I had was to e-mail the magic, general e-mail box for Consumer Affairs. No, thanks.
<br />
<h5>
Twitter</h5>
Finally, I took to Twitter, posting 5 of the photos I had taken and @ing Starwood. I received a reply from SPGassist claiming I'd "gotten their attention" and asking me to DM them my reference number. They failed to follow me, though, so I couldn't. I replied with my reference number immediately after they sent the tweet and asked them to follow me so I could DM. I sent my phone number to their Twitter account expecting, as had happened in other situations where I'd involved social media, to get a call that day. It's been 24 hours and <em>crickets</em>.
<br />
<h5>
Contrast with Another Property</h5>
I'd experienced this at another property four years ago. That time, I discovered the problem when I arrived. I went to the front desk and asked for them to put me up somewhere else. Without even an argument, my request was granted. They, too, took my bags and sanitized them, returning them to me that evening, as well as providing me with a new toothbrush and razor. The guy who delivered my luggage hung around while I went through everything to make sure it was all there and handled to my satisfaction. They gave me 4 times the points for my stay. When I arrived home, the manager at that hotel <strong>called me</strong>, not only to check that everything was in order, but to assure me that they had completely stripped and sanitized the room I stayed in so that future guests would not have the problem I had. To this day, they are one of the two brands I am loyal to. Look, bad things can happen when you travel, and bed bugs are one of the common "worst" things that can happen. But when it's handled well, it's not going to keep me away -- if anything, I now know that the other property's brand not only takes the problem seriously, but should that problem happen to me again, I know it'll be addressed in a professional, prompt manner. What I've learned from this experience with Starwood is that even in a <strong>serious matter such as bed bugs</strong>, I can expect to have to chase Customer Service around for a month and ultimately have to write, in vain, about it -- which tells me there's little-to-no hope should something less serious happen to me. No thanks. I'll be sending a few more e-mails elsewhere and one snail mail, certified letter, but I'm done with chasing general mailboxes and the non-existent people behind them.Matthew S. 
Dippelhttp://www.blogger.com/profile/09065753238713480937noreply@blogger.com0tag:blogger.com,1999:blog-3092647437783830192.post-51666344744884440712016-03-19T23:17:00.000-04:002016-03-19T23:19:29.790-04:00Fix: Serial TTY Terminal Output on Raspberry Pi 3 is Garbled, Garbage or Otherwise Broken<h5>Problem</h5>
You've prepped your Raspbian SD card. You've plugged your FTDI / RS232 adapter into your shiny new Raspberry Pi 3 and set up PuTTY for 115200 baud, 8 data bits, no parity, 1 stop bit, no flow control. You've wired everything correctly -- ground to ground, TX to RX, RX to TX -- and you plug in, only to be greeted by garbage. Lots of garbage. Blocks, corners, non-English characters. It looks like your modem did when it blew off a BBS connection in 1992. But it was just <strong>dandy</strong> with your Raspberry Pi 2! What gives?
<h5>What's happening</h5>
Bear with me since some of this may be inaccurate. I'll be honest, the last time I dealt with troubleshooting a serial terminal connection was easily 18 years ago. This shouldn't have happened, really. In my teen years, I actually coded a FOSSIL COM driver into Bulletin Board System software I had written for my multi-node BBS (get off my lawn!). 18 years and many lines of code later, I'm feeling senile.<br />
So with serial communication, there's always the sensitive issue of <strong>timing</strong>. Your Pi is trying to send data at 115200bps and failing. On the Raspberry Pi 3, the serial console on the GPIO header was moved to the mini UART (the full PL011 UART now services the onboard Bluetooth), and the mini UART derives its baud rate from the GPU core clock. That clock scales up and down dynamically, so the effective baud rate drifts away from 115200 and the result is the text vomit you're seeing in PuTTY.<br />
<h5>The Fix</h5>
The fix is simple, but has side-effects. If you're simply setting up the serial terminal in order to login and configure WiFi because you're too lazy to walk 10 feet to another room and plug in an Ethernet cable, it's a perfect solution. If you are interested in using serial communication for some other purpose, it's not.<br />
Simply plug the SD card into your PC. Open up the "config.txt" in the root of the boot partition in something that will handle the line endings properly (Notepad++, Atom, Visual Studio Code, Sublime, ... basically anything other than Notepad). Insert the line "core_freq=250". Save and safely eject the card. Pop it back into your Raspberry Pi 3 with everything plugged in and you should see the majestic, properly formatted, vomit-free boot sequence. Log in and you're good!<br />
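The edit itself is a one-liner. As an illustration only (run against a scratch copy, not your real card; on the Pi itself the file lives at /boot/config.txt):

```shell
# Simulate the edit on a scratch copy of config.txt
printf 'gpu_mem=64\n' > config.txt        # stand-in for the file's existing contents
printf 'core_freq=250\n' >> config.txt    # pin the core clock so the serial timing stays stable
grep '^core_freq' config.txt              # sanity check: the new line should be present
```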
Once you're done and network connected, remove that line from the config.txt (located in /boot/config.txt if you're editing directly on your RPi) and reboot. You'll be back to the original, faster clock speed (and broken serial communication).Matthew S. Dippelhttp://www.blogger.com/profile/09065753238713480937noreply@blogger.com0tag:blogger.com,1999:blog-3092647437783830192.post-834974180814474682016-03-18T19:16:00.000-04:002016-03-18T19:16:41.614-04:00HowTo: Add yourself as a local administrator via DirectAccess only connected PC<h5>Problem</h5>
You've just been offline domain joined to your domain, and you log in with your account only to discover that you're a non-administrator on your laptop! This won't do, so you launch Computer Management using a local administrator account (or a Microsoft Account that's an administrator) and try to add your Active Directory ID. You soon discover that though Computer Management thinks it can see the domain, it can't seem to find the account that you're actually logged into the computer with!
<h5>Why It Doesn't Work</h5>
Honestly, I'm not sure on this one. My hypothesis is that Computer Management launched as a local admin is not able to use the DA tunnel, but it knows you're in a domain and expects that it can get to it. This is backed up by the long (Not Responding) message as you wait for it to fail. Bummer.
<h5>The Fix</h5>
Use a tool that is so old that it can't possibly fail! Kidding. But it is old. Remember the "net" command?<br />
Launch a Command Prompt (cmd.exe) as a local administrator (or Microsoft Account with Local Administrator access).<br />
Type in:<br />
<pre>net localgroup administrators YOURDOMAIN\youraccount /ADD</pre><br />
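To double-check that the account landed in the group, listing the group's members (a harmless, read-only command) should now show your domain account:<br />
<pre>net localgroup administrators</pre>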
I believe you're going to have to use your SamAccountName (old style DOMAIN\account) rather than UPN (account@ActiveDirectoryDomain.int), but the latter may work. I didn't try it so I simply don't know. :)Matthew S. Dippelhttp://www.blogger.com/profile/09065753238713480937noreply@blogger.com0tag:blogger.com,1999:blog-3092647437783830192.post-61304891863659522642016-02-18T13:40:00.001-05:002016-02-18T14:05:09.332-05:00Monitoring Local Presence Changes in Lync Client SDK<h5>Overview</h5> <p>I’m giving a talk remotely today to the DFW Unified Communications Users Group on Skype for Business development and will be showing some of the code related to my simple <a href="http://matthewdippel.blogspot.com/2016/02/making-network-connected-raspberry-pi_18.html">Raspberry Pi LED Status</a> tool. The code behind the project was thrown together in under an hour and <em>really </em>isn’t a great example of best practices, however, it’s a good place to get an idea about how to do some basic development with the Skype for Business Client.</p> <p>In this post, I’ll go through some of the <a href="https://github.com/Diagonactic/RPi-LyncStatus">code used for that project</a>. This post is based on the initial commit, so make sure you’re referencing that in case I’ve got the bug and decided to update it.</p> <h5>The Lync Client SDK</h5> <p>Yes, Lync. Unfortunately, it wasn’t updated for the Skype for Business release, however, it works fine with the latest client. You’ll need to be running the latest Lync 2013 client or Skype for Business client in order to use solutions developed with Lync Client SDK, however, you require no other components to actually <em>run </em>the program—it’s all included with the client.</p> <h5>References</h5> <p>Download the Lync Client SDK (search, I won’t provide a link due to Link Rot regularly wiping them out) and install it.</p> <p>You’ll need to reference Microsoft.Lync.Model.dll and Microsoft.Lync.Utilities.dll. 
They’ll be located in the Office15 folder…somewhere.</p> <h5>A Note about Multi-threaded Code</h5> <p>The Lync Client SDK provides a set of APIs for interacting with the client using multi-threaded code. This means that you’re going to have to be cautious with certain operations. Many things you’ll want to work with are fired on events in a background thread and all of the perils of multi-threaded programming will apply. You’ll see a few things I’ve done to combat this, but the code in the initial commit is by no means fully audited to ensure thread-safety and a <em>lot </em>can go wrong in this area!</p> <h5>Interacting With the Lync Client</h5> <p>In the constructor for the Monitor class, you’ll see everything you need in order to connect to and interact with the Lync client.</p><pre class="brush: c#">// Connect to the current Lync Client
m_client = LyncClient.GetClient();
var contact = m_client.Self.Contact;
if (contact == null) // There's better ways to do this, but this works in a dirty implementation
{
SimpleLogger.Log("Client is not logged in - Setting Offline", m_gpioController.AllOff());
throw new InvalidOperationException("Client must be logged in before starting the monitor");
}
// Create a subscription and subscribe to our own contact object
contact.ContactInformationChanged += ContactOnContactInformationChanged;
var contactSubscription = m_client.ContactManager.CreateSubscription();
contactSubscription.Contacts.Add(contact);
</pre>
<p>The connection occurs at LyncClient.GetClient(). This method will throw if the client is not launched. I chose not to catch that exception since it effectively renders the application dead.</p>
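<p>For completeness: if you <em>did </em>want to degrade gracefully instead of crashing, the call can be wrapped as below. This is a sketch rather than code from the project; my recollection is that the SDK throws a ClientNotFoundException when no client process is running (verify against the SDK docs before relying on it):</p><pre class="brush: c#">LyncClient client;
try
{
    client = LyncClient.GetClient();
}
catch (ClientNotFoundException) // thrown when no Lync/Skype for Business client is running
{
    Console.WriteLine("No running Lync client found - exiting.");
    return;
}</pre>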
<p>From there, I subscribe to the ContactInformationChanged event and add a subscription to my local contact (m_client.Self.Contact). This ensures that when any property of my contact changes, the method “ContactOnContactInformationChanged” will fire (on a background thread).</p>
<h5>When Contact Information Changes</h5>
<p>I have a pretty simple event handler defined for that:</p><pre class="brush: c#">private void ContactOnContactInformationChanged(object sender, ContactInformationChangedEventArgs e)
{
if (e.ChangedContactInformation.Contains(ContactInformationType.Availability))
SetLedState();
}
</pre>The "e" argument lets us know what specific modification caused the event to fire. It's common for more than one thing to change at a time, but since I only care about the Availability component, I check for it and fire off "SetLedState". The <em>changed</em> information is not included, just the component that changed, so we have to look that up. This is done via the following in SetLedState(): <pre class="brush: c#">var contact = m_client.Self.Contact;
if (contact == null)
return;
object availabilityId = contact.GetContactInformation(ContactInformationType.Availability);
var availability = (ContactAvailability) availabilityId;
</pre>In this application, I'm only subscribing to the local contact's presence, so I simply grab that contact's Contact object. Availability is an int boxed in an object, not a ContactAvailability enum as one might expect; however, it's simple to cast it to ContactAvailability for easier decoding. From there, I "switch" on the ContactAvailability and set the LEDs using the GpioController class.
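<p>That decoding step boils down to something like the following sketch. Note that the SetLeds helper and the color mapping here are hypothetical placeholders for illustration, not the project's actual GpioController API:</p><pre class="brush: c#">object availabilityId = contact.GetContactInformation(ContactInformationType.Availability);
switch ((ContactAvailability) availabilityId) // unbox the int straight into the enum
{
    case ContactAvailability.Busy:
    case ContactAvailability.DoNotDisturb:
        SetLeds(red: true, green: false);   // hypothetical: solid red means "do not interrupt"
        break;
    case ContactAvailability.Free:
        SetLeds(red: false, green: true);
        break;
    default:
        SetLeds(red: false, green: false);  // away/offline/unknown: everything off
        break;
}</pre>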
<h5>What’s up With All The Interlocked Stuff in GpioController</h5>
<p>Remember all of that multi-threaded nonsense I mentioned earlier? I implemented Blinking using a Timer, which is a class that lets you fire off a method on a background thread at a given interval. Because of this, and because our status changes come on an event handler that fires on a background thread, there’s a few members of our GpioController that could be modified by more than one thread.</p>
<p>Normally I’d use a lock, and that would be fine here as well, but the requirements for this application were simply to ensure that the variable being read is the latest copy in memory. Lighter-weight patterns work fine in this scenario, and I use them enough that I simply reach for them when this kind of variable fits.</p>
<p>In addition, the GpioController is disposable because that timer needs to be cleaned up. The Dispose pattern that’s commonly used is not thread-safe. I’ve included a class in the Patterns class that handles it in a thread-safe manner. There’s not going to be a case in the application, as it’s currently written, where the GpioController will be disposed on anything but the main UI thread, however, I anticipate that future changes will introduce this and I’d rather not fix that later. For the most part, you can ignore that class. If you’ve not done a lot with the Interlocked static methods, you’ll find it to be confusing.</p>
<p>The GpioController keeps a “current LED state” in a flags enum. Enums are effectively syntactic sugar over ints with constant fields. I developed a <a href="https://www.nuget.org/packages/DiagonacticEnumsExtensions/">NuGet package</a> that contains a number of helpers for flags enums and includes a wrapper to provide “safe” access to an enum from multiple threads, guaranteeing that when the “Value” member is set, any getters on another thread will always receive the latest value instead of what happens to be in the cache for the core your code is executing on.</p>
<p>The only other place that needed protection was around the blinking feature. To protect that, I used an int variable in place of a bool and Interlocked to ensure it’s updated and read properly. Let’s look at that more closely:</p><pre class="brush: c#">// Check that the light is actually blinking and set it to NOT blinking in a threadsafe manner
if (Interlocked.CompareExchange(ref m_isBlinking, 0, 1) == 1)
{
    m_timer.Change(Timeout.Infinite, -1);
    Thread.Sleep(m_currentBlinkInterval.Milliseconds * 2);
}
</pre>
<p>The Interlocked call serves two purposes. First, it ensures that the value of m_isBlinking is both read and written in a manner that guarantees the latest value will be retrieved. Second, it ensures that if two threads hit this code at almost exactly the same time, only one will execute the statements in the if block, modifying our timer and incurring the penalty from Sleep.</p>
<p>What the code is actually doing is saying "read what is <em>actually</em> stored in m_isBlinking" and if it's set to "1", set it to "0". Then, check the value that was <em>previously stored in m_isBlinking</em> and continue into the body of the "if" <em>only</em> if the previous value of m_isBlinking was "1".</p>
<h5>Summary</h5>
<p>The code is pretty thoroughly commented, so I’d encourage you to look at each of the methods in each of the classes for more information. You’ll see I’ve put an event handler in for detecting power events, which is a good idea when your application is going to run on a laptop/tablet. It’ll catch the “Suspend” event and let you do anything you need to do to clean up before suspending. Beware that this code should be <em>simple, </em>since the machine can hit suspend before your code is finished executing. I use it to turn the LEDs off and catch the Resume to ensure the application doesn’t crash when coming back up (the Lync Client API will fire events before you can actually get at the data contained within the Contact class, but it’s easy to detect and prevent while resuming).</p>Matthew S. Dippelhttp://www.blogger.com/profile/09065753238713480937noreply@blogger.com0tag:blogger.com,1999:blog-3092647437783830192.post-22544631763920741892016-02-18T12:53:00.001-05:002016-02-18T14:06:12.198-05:00Making a Network Connected Raspberry Pi Display Your Presence State on LEDs<h5>Overview</h5> <p>I work from home full time and at <a href="https://modalitysystems.com">Modality</a>, we use Skype for Business for all of our conference calls and other communications needs.</p> <p>Being a work from home employee with children, particularly one who doesn’t like to lock himself in his office, I wanted a way to let my family know when I cannot be interrupted. And being a geek with geek kids, what better way to solve the problem than with legos and a Raspberry Pi?</p> <h5><strong>Items Required</strong></h5> <p>A Raspberry Pi – I used a Model B (non-Plus). Any Raspberry Pi model should work provided you can get it on the network somehow. These cost about USD $35.00. This would likely work with a Banana Pro, as well (about $45.00) and using that would give you wireless out-of-the-box.</p> <p>A Network Connection – I purchased a RealTek WiFi+Bluetooth Adapter. 
Personally, I wouldn’t recommend this product, since it required compiling a driver (the kernel didn’t have native support for it). Refer to the <a href="http://elinux.org/RPi_USB_Wi-Fi_Adapters">compatibility list</a> for the least effort in getting WiFi working. If you’re using the Banana Pro, you won’t need to worry about this.</p> <p>3 LEDs – Green, Yellow, Red if you want to keep with the colors that Skype for Business uses.</p> <p>3 270-330 ohm resistors – one for each LED.</p> <p>Wires and a Breadboard – or some other way to connect them to the GPIO pins.</p> <p>Legos to build an ugly case (optional).</p> <p>An SD card that works with the platform you’re using (I recommend sticking with 16GB to give you space to play around).</p> <h5>Setting up the Software</h5> <p>I won’t go into the full setup instructions for a Raspberry Pi, but you’ll want to get it up and running. I used the DietPi operating system, but this will work with Raspbian, and I’d recommend using the NOOBS tool to install if you’ve not done an RPi install before.</p> <p>You’ll need WebIOPi in order to interact with the GPIO via the web. This came preinstalled with DietPi.</p> <h6>Update the Raspberry Pi OS</h6> <p>I did the installation with the latest bits as of the writing of this post, and it’s always a good idea to be up-to-date. You’ll want to either connect via Secure Shell (PuTTY works well for this on Windows) or plug your RPi into a display and keyboard to get a shell. Once you’ve gotten a shell up, run these commands (and grab yourself a cup of coffee).</p> <pre class="brush: bash">$ sudo apt-get update
$ sudo apt-get upgrade
$ sudo rpi-update
</pre>
<h6>Configure WebIOPi</h6>
<p>The GPIO pins aren’t configured the way we need them to be, yet, so we’ll edit the WebIOPi configuration to get that going.</p><pre class="brush: bash">$ sudo nano /etc/webiopi/config
</pre>
<p>Nano is a simple console text editor. Locate the heading labeled [GPIO] (hint: CTRL+W can be used to find text in Nano; however, [GPIO] should be at the top). Anything with a “#” in front of it is ignored (a comment). The configuration under [GPIO] is “PIN = DIRECTION STATE”. If you have 17, 18 and 27 already set up in some way, you’ll want to modify those lines. If they are not configured at all, you can simply add the following lines.</p><pre class="brush: bash">17 = OUT 0
18 = OUT 0
27 = OUT 0
</pre>
<p>Locate the [HTTP] heading and make sure it is set up as follows</p><pre class="brush: bash">[HTTP]
# HTTP Server configuration
enabled = true
port = 8000
# File containing sha256(base64("user:password"))
# Use webiopi-passwd command to generate it
passwd-file = /etc/webiopi/passwd
# Change login prompt message
prompt = "WebIOPi"
</pre>
<p>Locate the [REST] heading and make sure it is set up as follows</p><pre class="brush: bash">[REST]
# By default, REST API allows to GET/POST on all GPIOs
# Use gpio-export to limit GPIO available through REST API
gpio-export = 17, 18, 27
# Uncomment to forbid changing GPIO values
#gpio-post-value = false
# Uncomment to forbid changing GPIO functions
#gpio-post-function = false
# Uncomment to disable automatic device mapping
#device-mapping = false
</pre>
<p>The rest should be fine as is. Hit CTRL+X and choose “Yes” to save the file. Finally, we’ll set up a user/password that will let us authenticate.</p><pre class="brush: bash">sudo webiopi-passwd
</pre>Set up an ID and password of your choosing.
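<p>The comment in the [HTTP] section above describes the passwd file contents as sha256(base64("user:password")). If you’d rather script the credential setup than answer webiopi-passwd’s prompts, that digest can be reproduced with standard tools. This is only a sketch of my reading of that comment, not WebIOPi’s documented behavior, so compare its output against a file generated by webiopi-passwd before relying on it:</p>

```shell
# webiopi_hash USER PASS - prints sha256(base64("USER:PASS")), the digest
# format described by the comment in /etc/webiopi/config.
webiopi_hash() {
    printf '%s' "$1:$2" | base64 | tr -d '\n' | sha256sum | awk '{print $1}'
}

# Example with placeholder credentials (substitute your own):
webiopi_hash myuser mypass
```

<p>If the output matches what webiopi-passwd wrote, then webiopi_hash youruser yourpass | sudo tee /etc/webiopi/passwd accomplishes the same thing non-interactively.</p>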
<h5>Wiring it up</h5>
<p>The wiring for the project is pretty simple. Here’s a diagram that explains how to set everything up. <strong><em>Make sure your resistors are wired correctly! </em></strong>I’m not an EE guy, but in my reading, the warnings indicated that failing to install resistors, or using the wrong ones, can damage the Raspberry Pi.</p>
<p>It’s also important to note that LEDs are one-way devices. If you install them backward you won’t break anything, but they won’t light up (which is the first thing to check if your tests don’t work).</p>
<p><a href="https://lh3.googleusercontent.com/--dZi0cH4hSQ/VsYE-v4et-I/AAAAAAAAPcE/6ZYvW5klSH4/s1600-h/Diagram-Cropped%25255B3%25255D.png"><img title="Diagram-Cropped" style="border-left-width: 0px; border-right-width: 0px; background-image: none; border-bottom-width: 0px; padding-top: 0px; padding-left: 0px; display: inline; padding-right: 0px; border-top-width: 0px" border="0" alt="Diagram-Cropped" src="https://lh3.googleusercontent.com/-sM5HOqcM59A/VsYE_NJI5gI/AAAAAAAAPcI/r2QBNcoZJQQ/Diagram-Cropped_thumb%25255B1%25255D.png?imgmax=800" width="558" height="503"></a></p>
<p>That’s the GPIO arrangement of a Model B (non-Plus version).</p>
<p>That’s it, you’re ready for the code.</p>
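<p>Before we switch to the Windows box, you can sanity-check the wiring and the WebIOPi configuration from any machine with curl, using WebIOPi’s REST endpoint for setting a pin’s value (/GPIO/&lt;pin&gt;/value/&lt;0|1&gt;). This is a sketch: the host, port and credentials are placeholders for whatever you configured earlier, and it assumes the webiopi service is running on the Pi.</p>

```shell
# gpio_url HOST:PORT PIN VALUE - builds the WebIOPi REST URL that sets a
# GPIO pin's value (endpoint shape: /GPIO/<pin>/value/<0|1>).
gpio_url() {
    echo "http://$1/GPIO/$2/value/$3"
}

# flash_leds - turns each of our three LEDs on for a second, then off.
# raspberrypi.local:8000 and myuser:mypass are placeholders for your setup.
flash_leds() {
    for pin in 17 18 27; do
        curl -s -u myuser:mypass -X POST "$(gpio_url raspberrypi.local:8000 "$pin" 1)"
        sleep 1
        curl -s -u myuser:mypass -X POST "$(gpio_url raspberrypi.local:8000 "$pin" 0)"
    done
}
```

<p>Call flash_leds and each LED should light in turn; if one stays dark, check its polarity and resistor before blaming the software.</p>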
<h5>On Your Windows Box</h5>
<p>The code is client-side, so it won’t work if your PC is turned off or asleep, but it’s the easiest way to do it and doesn’t require administrator privileges on the Skype for Business server, as would be required if we’d done this with the Unified Communications Managed API (though it can be done that way with a bit more work).</p>
<p>You can get the source code from <a href="https://github.com/Diagonactic/RPi-LyncStatus">the GitHub Repo</a>. I’d normally provide a compiled version, but because the software is <em>very </em>hastily designed, you’ll need to build it yourself. You can download Visual Studio 2015 Community Edition to compile the project.</p>
<h5>Running the Application</h5>
<p>Hit Win+R and type CMD. Go to the folder that the application compiled to (usually ProjectName\bin\Debug) and run:</p><pre class="brush: bash">Cs-WebIoPi &lt;IPADDRESS&gt; &lt;PORT&gt; &lt;USERID&gt; &lt;PASSWORD&gt; --test
</pre>
<p>If you’ve configured it following this guide, you’ll put “8000” for the port. When started in “Test” mode, it’ll blink all of the LEDs and cycle through each one so that you can verify your wiring. Fix any wiring issues and, when you’re all set, run it with the same command above but without the “--test” switch.</p>Matthew S. Dippelhttp://www.blogger.com/profile/09065753238713480937noreply@blogger.com0tag:blogger.com,1999:blog-3092647437783830192.post-42387117850838406212016-02-06T17:23:00.000-05:002016-09-26T18:34:59.374-04:00HOWTO: Build the RTL8723BU for Raspbian (Raspberry Pi Model B)<p><b>UPDATE: <a href="https://www.kickstarter.com/projects/1728237598/hubpiwi-blue-pi-zero-add-on-wifi-bluetooth-3-usb-p">The HubPiWi Blue</a> includes this chipset and the instructions for that also work with the USB dongle. I've provided an updated set of instructions <a href="http://matthewdippel.blogspot.com/2016/09/howto-getting-hubpiwi-blue-drivers.html">here</a> that are a bit simpler and are targeted at Raspbian Jessie. You should use those, instead.</b></p>
<p>I recently started setting up my second RPi, only on this one I wanted to use a dual WiFi/Bluetooth USB dongle. I picked a device based on the Realtek RTL8723BU after reading a few posts indicating success. Unfortunately, it doesn’t work “out-of-the-box” with Raspbian Wheezy on Kernel 4.1.13.</p> <h5>Before we Proceed</h5> <p>I’ve been tinkering with RPi’s for a little bit, but I’m by no means an expert. If there are obvious problems with the steps I’m doing here, kick me a comment and I’ll correct them. This worked for me. It might not work for you. I’m assuming you’re on 4.1.13+ installed via NOOBS and you haven’t added/changed much up to this point. If you have, some of the steps (like removing the build/src folder) might already have things in them that you don’t want to lose, so exercise some discretion before running each command. And, of course, this comes with NO WARRANTY expressed or implied – use at your own risk, my friend.</p> <h5>What You’ll Need</h5> <p>Besides the hardware, you’ll need a wired or working wireless connection (using a device other than the one we’re building a module for).</p> <p>If you’ve done the Raspbian install from NOOBS, you should have all of the necessary packages required to build the driver. If you haven’t, you might need to apt-get install build-essential and other packages that I’m not entirely sure of.</p> <h5>Overview</h5> <p>We’ll update the system, clone the git repo for the driver, download the kernel source, set up the build environment, and build the driver.</p> <p>I’ll also provide the steps I took to get the miserable thing to actually<em> work!</em></p> <h5>Let’s Get To It</h5> <p>To get this going, you’ll need the kernel source and it’s probably a good idea to make sure you’re all up to date.</p>
<pre class="brush: bash">
sudo apt-get update
sudo apt-get upgrade
sudo apt-get dist-upgrade
sudo reboot</pre>
<p>You can go run some errands during the dist-upgrade, it took over an hour for me.</p> <p>To start, get the latest drivers</p>
<pre class="brush: bash">
cd ~/
git clone https://github.com/lwfinger/rtl8723bu.git
</pre>
<p>If you’ve installed from NOOBS (as I did), at this point, you’re not going to be able to make the driver. You need the kernel headers for the kernel you’re running. Normally, apt-get install linux-headers or something along those lines would do the trick. Not this time.</p> <p>Unfortunately, when I wrote this, the Linux headers for the version of the kernel used by Raspbian, 4.1.13+, were not available via apt. After some searching, I found a downloadable Debian package that works.</p>
<pre class="brush: bash">
wget https://www.niksula.hut.fi/~mhiienka/Rpi/linux-headers-rpi/linux-headers-4.1.13%2B_4.1.13%2B-2_armhf.deb
sudo dpkg -i ./linux-headers-4.1.13+_4.1.13+-2_armhf.deb
</pre>
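<p>Once dpkg finishes, it’s worth confirming the headers landed where the driver build will look for them (/lib/modules/&lt;kernel version&gt;/build, the path the Makefile picks up as KSRC). A quick sketch; the base path is a variable only so the check is easy to exercise anywhere:</p>

```shell
# headers_present KVER - succeeds when kernel headers for version KVER are
# installed, i.e. when $MODULES_BASE/KVER/build exists.
MODULES_BASE=${MODULES_BASE:-/lib/modules}
headers_present() {
    test -d "$MODULES_BASE/$1/build"
}
```

<p>headers_present "$(uname -r)" should succeed before you bother running make; if it doesn’t, the package didn’t install for the kernel you’re actually running.</p>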
<p>If you lack one or more of the dependencies, install them with apt-get install &lt;dependency&gt;, then apt-get -f install, which will finish up the installation of the headers.</p> <h5>Modifying the Makefile for Raspbian</h5> <p>We’ll need to tweak the Makefile for the driver to build properly. I’m not sure that all of this needs to be added – I grabbed it from several Google searches. But, hey, it worked, so here’s what I changed.</p> <p>nano Makefile</p> <p>Hit CTRL+W and type “CONFIG_PLATFORM_I386_PC =” and hit Enter. Set it to “n”. Add a line below it and type “CONFIG_RASPBIAN = y”.</p> <p>Hit CTRL+W and type “ifeq ($(CONFIG_BT_COEXIST)” and hit Enter. Insert the following text above that line.</p>
<pre class="brush: bash">
ifeq ($(CONFIG_RASPBIAN), y)
EXTRA_CFLAGS += -DCONFIG_LITTLE_ENDIAN
EXTRA_CFLAGS += -DCONFIG_IOCTL_CFG80211
EXTRA_CFLAGS += -DRTW_USE_CFG80211_STA_EVENT # only enable when kernel >= 3.2
EXTRA_CFLAGS += -DCONFIG_P2P_IPS
ARCH := arm
CROSS_COMPILE := arm-linux-gnueabihf-
KVER := $(shell uname -r)
KSRC ?= /lib/modules/$(KVER)/build
MODULE_NAME := 8723bu
MODDESTDIR := /lib/modules/$(KVER)/kernel/drivers/net/wireless/
endif
</pre>
<p>Hit CTRL+X then “Y” to save the file.</p> <h5>Build It</h5><br>
<pre class="brush: bash">
sudo -i
cd /home/pi/rtl8723bu
make && make install
</pre><p>That part took almost an hour on my RPi Model B.</p> <h5>Making It Work</h5> <p>Well, at this point, you can just reboot. But if you want to test prior to rebooting, run the following:</p>
<pre class="brush: bash">
insmod 8723bu.ko
ifconfig
</pre>
<p>You should now see “wlan0” among the network adapters!</p>Matthew S. Dippelhttp://www.blogger.com/profile/09065753238713480937noreply@blogger.com0tag:blogger.com,1999:blog-3092647437783830192.post-28437479134853022612016-02-04T22:11:00.001-05:002016-02-04T22:13:21.501-05:00HowTo: Keep the factory image from booting after you’ve upgraded to DD-WRT<p>So you’ve upgraded your WRT1900AC router (or other similarly designed router) to DD-WRT and it’s humming along fine, or maybe you’ve done the upgrade and then followed it up with a 30-30-30 reset and <em>suddenly </em>you’re back to the original Linksys Firmware! Gah, lousy downgrading $#!+!</p> <h5>What’s happening</h5> <p>Linksys routers used to be a popular choice for running DD-WRT back in the days when the vendors were far less happy with you replacing their buggy, insecure firmware with something open and customizable. Rather than taking the approach of keeping you from replacing their firmware, lately they’ve gone the route of keeping you from <em>breaking </em>your device should you choose to hack around with DD-WRT.</p> <p>They used to do this by providing a “failsafe” mode, an extremely limited but “possible to fix” state that would leave you carefully timing a TFTP upload during the first two seconds of boot (this still exists, I believe). Many a panicked afternoon was spent in this home praying my $150 router didn’t just turn into a doorstop because of this. They’ve gotten <em>way </em>better since. Your flash memory is split into two partitions, only one of which has the currently running firmware. The other has the last version of the firmware that you successfully installed. When you upgrade, it replaces the unused partition and changes an NVRAM value to tell the bootloader to use the other partition for subsequent boots.</p> <p>Hopefully no more TFTP adventures, but it comes at a small cost.
Should something <em>bad enough</em> happen, or should you do a “factory reset”, you’re likely going to end up booting the other partition – the one you were <em>trying </em>to get away from.</p> <h5>Making the install a little more Undo Proof</h5> <p>If you’ve followed the installation guides to the letter, you probably did what I did: upgraded, rebooted and then did a 30-30-30 factory reset. I’m not sure if it’s the timing of *when* you do the 30-30-30 or just doing the 30-30-30 that causes it, but the boot partition ends up flipping right back to the original firmware.</p> <p>I think I ended up doing the 30-30-30 during the boot-up process into the new firmware which, I’m guessing, caused the boot to fail and reverted me back to the last version. I’m honestly not sure, but after that happened, I simply skipped the 30-30-30 until I finished the next step.</p> <h5>Install DD-WRT … Twice</h5> <p>Yup. Simply installing one SVN version out of date, then the latest SVN version (you might even be able to install the latest on both, but I didn’t try it), ensured that I had both partitions on DD-WRT. After that, I did a factory reset through the web interface (which didn’t take me to the backup partition, incidentally) and I was ready to go with a fresh router that can’t fall back to the factory image.</p> <p>This doesn’t solve everything, unfortunately, because there are apparently other reasons it may decide to toss you into the backup partition. In my case, it <em>looks</em> like a failed boot might have caused it to revert the second and third times. But once you have DD-WRT installed, you can drop to SSH and run nvram get bootpartition, and simply set it to <em>the other one </em>via nvram set bootpartition. That’s a heck of a lot easier than redoing the upgrade during the <em>rare </em>times this might happen.</p>Matthew S. 
Dippelhttp://www.blogger.com/profile/09065753238713480937noreply@blogger.com0tag:blogger.com,1999:blog-3092647437783830192.post-27182513514204726132015-10-13T09:29:00.000-04:002016-08-11T12:54:06.133-04:00Quick Fix: You can't debug the Silverlight application because the Silverlight Developer Runtime is missing<h5>
Problem</h5>
You've fired up the shiny new Visual Studio 2015 (or an older version) to debug a Silverlight 5 application and you get an error: <strong>"You need to install the latest Silverlight developer run-time before opening Silverlight project"</strong>. And, as is typical of matters requiring a blog post, the go.microsoft.com link provided doesn't go anywhere.
<br />
<h5>
Solution</h5>
Microsoft, intelligently, gave up on Silverlight but there are still many projects that require it. My day job is Lync development and I needed this to troubleshoot a Lync CWE. All variations of the run-time are still available, but due to a security issue, they were moved to an obscure place... a security bulletin!<br />
What you want is <a href="https://www.microsoft.com/en-us/download/details.aspx?id=52977">KB3162593</a> (updated to the June 2016 patch). After clicking Download, select Silverlight_Developer; click the install link and you'll see Silverlight_Developer_Runtime.exe as an option. Be sure to uninstall Silverlight first, or you'll get an error. The 64-bit variant is there as well, along with the Version 5.0 SDK and others.<br />
Woo Hoo! Time to party like it's 2011!<br />
And in case you're looking for the Toolkit, it's still at <a href="https://silverlight.codeplex.com/releases/view/78435">its old CodePlex link</a>.Matthew S. Dippelhttp://www.blogger.com/profile/09065753238713480937noreply@blogger.com0tag:blogger.com,1999:blog-3092647437783830192.post-89878730796714916002015-09-13T14:33:00.001-04:002016-10-04T10:04:55.249-04:00HOWTO: Fix a GE XL44 Gas Oven that won't pre-heat (or takes forever to pre-heat)<h6>Disclaimer</h6>
You're messing with a gas oven. Natural gas is dangerous. Electricity is dangerous. You could break your oven. You could burn or kill yourself. If you're not comfortable with fooling around with these sorts of things, call a professional. I'm not a professional, I'm a homeowner who likes to tinker. This information is provided AS IS with no warranty express or implied. If you use any of this information to repair your stuff, YOU are responsible for the outcome and agree not to hold me responsible for it...even if the information I've provided here is dead wrong.
<h6>Repair Difficulty</h6>
Holy cow, was this easy. Including taking things apart, pulling the oven out of its spot, installing the new part, and putting everything back, it took me 15 minutes to complete.
<h6>Symptoms</h6>
The problem can present itself as several different symptoms. In my case, the oven would take a long time to light and once it lit, it would shut off within a few seconds. I've had problems with this unit before and the symptom was that it wouldn't light at all. Because it was firing up but then shutting off, I had thought it might be the valve that was at fault, but after a lot of reading on the interwebs, I went after the igniter. When I tested the oven igniter, I noticed that it didn't glow nearly as yellow as the broiler igniter; that's a sure-fire sign of a failed igniter. It's a cheap part, available here: <a href="http://amzn.to/2dPgF5T">GE WB13K21 Igniter for Oven</a>, and if it turns out that wasn't the bad part, rest assured, it'll fail at some point and it'll be good to have a replacement around. Be sure to search the exact model number of your oven (it's on a sticker that's visible when you open your warming drawer or oven door). Mine used the square igniter. Some use the round one.
<h6>What's wrong?</h6>
One word: Igniter. The igniter on these units has a lifetime, sometimes short, sometimes a few years. It depends on how often you use it, but it will fail. Most of the other parts on the oven will last a very long time. This one won't. Luckily, it's easy to replace.
<h6>Why is it doing this?</h6>
Again, I'm not a professional, but I understand a little bit about how these things work. The process for starting your gas oven is pretty simple. Electricity runs through your igniter, making it glow. In a working igniter, it will glow yellow/white and become *very* hot. When it reaches the right level of "hot", the valve that controls the flow of gas into the burner opens up. If for some reason the igniter isn't getting hot enough, the valve will not open. In my case, the igniter was just beginning to fail. It was getting plenty hot (well past the point of being able to light the oven), but electrically it wasn't getting hot enough for the valve to keep gas flowing into the burner. As a result, within a second or so the valve would shut and the flame would die (and a small stench of natural gas would flow into the room).
<h6>Tools Needed</h6>
A 1/4" socket wrench or a screwdriver that uses magnetic replaceable bits (common in "multi-screwdriver" tool sets -- they just happen to be 1/4").<br />
A flat-head screwdriver and a Phillips head screwdriver (or appropriate bits for your magnetic replaceable-head screwdriver).
<h6>Steps</h6>
Unplug the oven from power.<br />
Shut the gas off leading to the oven.<br />
Take the cover off of the bottom of the oven. This is done by removing the two screws in the back, pushing the bottom cover toward the back and tilting it up and out.<br />
<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhrV1xgz88U4MRfzdjJM3Llk6A05-yjbmYD_rIpO-XIw9oLg2-ZgeUdSTqnoIWiPDfwPToc9fBBDvJDN6nfC5ph5odcNyeQzpPlaDpzqo51urPRpwxZfqY0o_uMMyCHqmXaiTnoJWTs5IKA/s1600/IMG_20150912_132659.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhrV1xgz88U4MRfzdjJM3Llk6A05-yjbmYD_rIpO-XIw9oLg2-ZgeUdSTqnoIWiPDfwPToc9fBBDvJDN6nfC5ph5odcNyeQzpPlaDpzqo51urPRpwxZfqY0o_uMMyCHqmXaiTnoJWTs5IKA/s320/IMG_20150912_132659.jpg"></a></div><br />
Take the warming drawer off by pulling it all the way out and then pushing up on the little plastic tab sticking out of the left drawer arm and down on the little plastic tab on the right.<br />
You'll see the following in the back of your oven.
<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj3dq6iKy6213DrEueUGZd-nux_6mru4hyZTX0NgesNBYtc9_vV6gITam3OyxafLr9_VpK3hZrB0E_HZWR1SZqeH0lEAweSgtZ2paClzH6NirX3b19E_yF8pr6v2SyTwijTSkBgNqHOo96Y/s1600/IMG_20150912_132640.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj3dq6iKy6213DrEueUGZd-nux_6mru4hyZTX0NgesNBYtc9_vV6gITam3OyxafLr9_VpK3hZrB0E_HZWR1SZqeH0lEAweSgtZ2paClzH6NirX3b19E_yF8pr6v2SyTwijTSkBgNqHOo96Y/s320/IMG_20150912_132640.jpg"></a></div>
Unplug the plastic cable connection and pull the cable up from behind the igniter and burner in the back.<br />
Remove the two screws securing the igniter.<br />
OPTIONALLY - If you haven't purchased your part yet and just want to get the oven Bake feature working (we rarely use the broiler so we were more OK with that being out), you can swap the two igniters if they are identical in your XL44 (there are *many* models of XL44 on the market, so check them). If you decide to do this, remove the small metal panel from the back to get to the connection for the broiler igniter and remove the two screws securing it (Phillips head).<br />
Install the new igniter. If you purchased one with a connection attached, just plug it back into where the other was.<br />
Secure the new igniter the same way the old one was secured, plug in the oven, re-open the gas valve, and start a pre-heat. It should glow yellow:<br />
<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjeYHupYBqqC5mHyDCKaBvG5PoGGEziWCnYs3cG_ktSWM0VeY-ZWJJN7LSLmlluqcAlugIc26nM4-7Kr1CY6xdJXBfrL1SYmBHeP_BPIZrnhdVXgEBYeiYvUDkX07Ys-liqnTG3PD0jgqqL/s1600/IMG_20150912_133212.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjeYHupYBqqC5mHyDCKaBvG5PoGGEziWCnYs3cG_ktSWM0VeY-ZWJJN7LSLmlluqcAlugIc26nM4-7Kr1CY6xdJXBfrL1SYmBHeP_BPIZrnhdVXgEBYeiYvUDkX07Ys-liqnTG3PD0jgqqL/s320/IMG_20150912_133212.jpg"></a></div>
Turn the oven off, let it cool, put it all back together and you're good to go!<br />
Matthew S. Dippelhttp://www.blogger.com/profile/09065753238713480937noreply@blogger.com0tag:blogger.com,1999:blog-3092647437783830192.post-81533400154128928442015-08-31T22:07:00.000-04:002015-08-31T22:07:00.595-04:00HOWTO: Get the conditional compile symbols defined in the project for a source file (ITextView) in a Visual Studio ExtensionOne of the things I ran into when parsing with Roslyn in Stay Frosty was that #if / #endif blocks were treated as strictly in the parser as they are in Visual Studio (well, that should be obvious, it's the parser used by VS, isn't it?). That meant code that was riddled with preprocessor rules wasn't being parsed properly because it didn't know what I had defined.<br />
Fortunately, the Visual Studio SDK provides a way to get at these values. Unfortunately, it's not as straightforward as I would have liked.
<h5>Conditionals given an ITextView</h5>
Here's a quick extension method I whipped up to pull the "defines" out of the project file given an ITextView/IWpfTextView.
<pre class="brush: csharp">
private static IEnumerable<string> Conditionals([NotNull] this ITextView textView)
{
if (textView == null) throw new ArgumentNullException(nameof(textView));
// Get the text document from the text buffer (if it has one)
ITextDocument textDocument;
if (!textView.TextBuffer.Properties.TryGetProperty(typeof (ITextDocument), out textDocument))
    yield break;
var componentModel = ServiceProvider.GlobalProvider.GetService(typeof(SComponentModel)) as IComponentModel;
// Get the VsServiceProvider
var vsServiceProvider = componentModel?.DefaultExportProvider?.GetExportedValue<SVsServiceProvider>();
if (vsServiceProvider == null) yield break;
// Get the DTE ...
var dte = (DTE) vsServiceProvider.GetService(typeof (DTE));
ProjectItem projectItem = dte.Solution.FindProjectItem(textDocument.FilePath);
Configuration activeConfiguration = projectItem?.ContainingProject?.ConfigurationManager?.ActiveConfiguration;
var defineConstants = activeConfiguration?.Properties?.Item("DefineConstants")?.Value as string;
if (defineConstants == null)
yield break;
// DefineConstants entries are semicolon- and comma-delimited, so we need to split on both.
string[] constantSplit = defineConstants.Split(new[] { ';', ',' }, StringSplitOptions.RemoveEmptyEntries);
foreach (var item in constantSplit)
yield return item.Trim(); // They can contain whitespace on either end, so we'll strip 'em.
}
</pre>
Of course, you could get some of those services via an Export and eliminate the GetService / GetExportedValue calls, but I thought I'd include them since I don't know what you've decided to get from MEF.<br />
Have a lot of fun!Matthew S. Dippelhttp://www.blogger.com/profile/09065753238713480937noreply@blogger.com0tag:blogger.com,1999:blog-3092647437783830192.post-23015671626475663442015-08-30T19:22:00.000-04:002015-09-05T16:52:31.665-04:00Stay Frosty Visual Studio Extension - Method Block Highlighter, Text Clarity, Background Images and a lot more...<b>Updated 9/4/2015 9:00 PM</b> Published to the Extension Gallery (links at the bottom). Thanks for the feedback!<br />
For the most updated information, visit the <a href="http://matthewdippel.blogspot.com/p/stay-frosty-method-and-constructor.html">release page</a><br /><br />
It's hard to believe, but I've been working on this extension for almost a year. That's what happens when you only have a few hours every weekend and the occasional evening to work on a personal project. It started out with a simple desire to add a chiseled effect to the text displayed in Visual Studio. I used to get horrible migraine headaches, and that little effect made text visibility at low contrast much better without having to change my syntax highlighting rules. A little later, I decided I hated being tied to my multi-monitor setup just to be productive when writing software. So I started working exclusively from my laptop screen, a nice 1920x1080 (still not as nice as having a few portrait 1080P displays and a primary 2K). With a little bit of a change in how I worked (mostly just getting used to some features I'd never needed with unlimited real estate), I was easily as productive on my laptop screen. The advantage of changing scenes is that I can work whenever I feel inspired; no more having to tuck myself into a room.
<h5>Features</h5>
Most of the features of Frosty were designed with two purposes in mind: (1) Make working with code in limited screen real estate easier and (2) Make my environment prettier. Both are quite subjective and I don't expect everyone to agree with my decisions ... in fact, I wrote this for me, so if I'm the only user, I'll still be happy!
<h6>Method Blocks</h6>
<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiavizUvL_NNalGSCHhHyXWQVBrcuYkHz9Dj5AGtu0j-2oGDJ2g0f4NtVNPMyckKNOSj2ebgq1F6zBEh5b6wv0lmKu3qUKzF23BemgSawNAnYGmG86vxQx9zDe7qyWGnAuHdXWRDLHfxAYK/s1600/MethodBlocks.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiavizUvL_NNalGSCHhHyXWQVBrcuYkHz9Dj5AGtu0j-2oGDJ2g0f4NtVNPMyckKNOSj2ebgq1F6zBEh5b6wv0lmKu3qUKzF23BemgSawNAnYGmG86vxQx9zDe7qyWGnAuHdXWRDLHfxAYK/s1600/MethodBlocks.png" /></a></div>
Methods are highlighted with configurable colors, Static, Instance and Constructors. I decided against making those colors configurable in Fonts and Colors because I wanted to be able to control alpha on each, so I've done it via its own Options panel with a color picker. Sure, methods should be small enough to easily visually parse the beginning and end, but we're not always given the privilege of modifying code we've written. By default, constructors are bright, instance methods are dim and static methods are somewhere in between. I'm always looking to get back to a constructor, so I made it stand out a bit. In a God Object or a method that does too much (that, of course, *I* didn't write! :o) ...), finding important bits is easy.
<h6>Method Signatures</h6>
<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiYXLKIY8AoOZOkM0yLrK4HcaRpVAHAO80TfaD5V5hIMhwWyX3q32eSu9LbBaXTlRsm73p8Rbh0nHGNPFwFMDgoCu339n3XGSHT1E2a46M1GKwPBTFI79CGdux2fn_OJpRGDKflPC7IVjhN/s1600/MethodSignature.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiYXLKIY8AoOZOkM0yLrK4HcaRpVAHAO80TfaD5V5hIMhwWyX3q32eSu9LbBaXTlRsm73p8Rbh0nHGNPFwFMDgoCu339n3XGSHT1E2a46M1GKwPBTFI79CGdux2fn_OJpRGDKflPC7IVjhN/s1600/MethodSignature.png" /></a></div>
When the method signature scrolls off the top of the display, it shows up on the left-hand side next to the box around the method. Sure, that information is available from the drop-down at the top, but my caret isn't always in the method I want to know about. Now it's right in front of me.<br />
<h6>Text Rendering</h6>
Text rendering in the editor window is also configurable, much like the great TextSharp extension, except I only apply the effect to the text editor window (for no reason other than that I didn't need the rest). You can enable ClearType or kill it, along with the other things WPF allows you to tweak that Visual Studio doesn't give you direct control over.
<h6>Chiseled Text Effect</h6>
<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhgUcxuXB6gW98MZbtSdIDo2Oq_YfaWScaG6BnZubJUpTBr4rX0pOlg-AFJUYnmzpsFNK0EgtEY1JQsGuOHMw902RfayNlYu3FyaWyMPsDLyx2g-TcbdsqPqkZH5xmdadUXmjlSYzR6GmMV/s1600/ChiseledText.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhgUcxuXB6gW98MZbtSdIDo2Oq_YfaWScaG6BnZubJUpTBr4rX0pOlg-AFJUYnmzpsFNK0EgtEY1JQsGuOHMw902RfayNlYu3FyaWyMPsDLyx2g-TcbdsqPqkZH5xmdadUXmjlSYzR6GmMV/s1600/ChiseledText.png" /></a></div>
It's off by default. This was the original feature that I wanted, but getting it working turned into a several-month-long adventure with HLSL. Unfortunately, it doesn't render properly in Visual Studio 2015 without Hardware Rendering enabled. The effect is touchy: colors need to have a bit of grey in them to render, and different colors render slightly higher or lower on the baseline. It's a bug I intend to fix, but for now consider it quite "alpha" in nature. That's life!
<h5>Configuring</h5>
Go to "Stay Frosty" under "Tools|Options". You'll find it *highly* configurable. I like ultimate control over my environment, so I tried to leave nothing out as far as customization goes. Maybe you don't like the features I've implemented? Turn 'em off or change them! Sure, <a href="http://gettingreal.37signals.com/ch04_Make_Opinionated_Software.php">software should be opinionated</a> ... except when its users are developers.
<h5>License</h5>
I'm not ready to release the source code quite yet, but I will be. The extension is licensed under the Apache 2.0 license. I had to learn quite a bit about Visual Studio Extension development over the last few months and I'm hoping that my experiences with it will help others, so I'm separating out parts into their own projects that can be used by others as utility libraries. Right now, time is keeping me from completing that part, plus I'd like to get a little more feedback from testing in the wild before I throw that out there.
<h5>Compatibility</h5>
At the moment Frosty supports Visual Studio 2013 and Visual Studio 2015. I'm using the awesome <a href="https://github.com/icsharpcode/NRefactory">NRefactory</a> library for code parsing in Visual Studio 2013. The 2015 version, of course, uses Roslyn and as a result performs a bit better than the 2013 version. Since I started with 2013, I didn't want to abandon that work and all of the folks who are stuck on the old version.
<h5>Caveats - It's Beta</h5>
It's beta. If you run into difficulties, please let me know at matthew dot dippel at google's public e-mail service. I'd love to get it right and working. If you want to help out, I'll have the code out on GitHub soon!<br />
There's (at least) one bug: the signatures don't always disappear from the left-hand side when the top of the method becomes visible again while scrolling up. The extension also uses some additional resources when parsing code on Visual Studio 2013, but it shouldn't get in the way too much on a decent machine (it performs well on my Core i5).<br />
<h5>Updates from Initial Feedback</h5>
If you use ReSharper (I do!), you can enable their syntax highlighting rules (disabled by default, enable them in options).<br />
You can now enable the Method Signatures to display regardless of whether or not the method signature is scrolled off screen (disabled by default).<br />
Abstract methods are no longer bordered.<br />
Two libraries were removed in favor of the PresentationFramework equivalents.<br />
The error that some were seeing when visiting the options page might be fixed. (I'm not seeing this on my machines)<br />
If ReSharper or the User Classes/Enums/etc Fonts and Colors option is missing, we'll use Identifiers instead of Plain Text.<br />
Fixed an exception that occasionally happened on file load where the width of the adornment would be calculated to a negative number.<br />
Fixed the caching of method signatures so they wouldn't have to be re-created every time they were drawn.<br />
<br /><a href="https://visualstudiogallery.msdn.microsoft.com/91cb9cc4-13a3-41fe-a3fe-545786a0ceab">Download the Visual Studio 2015 Edition</a><br />
<a href="https://visualstudiogallery.msdn.microsoft.com/692f93e4-7dd2-4e28-9ca8-2ddff92e0aab">Download the Visual Studio 2013 Edition</a>Matthew S. Dippelhttp://www.blogger.com/profile/09065753238713480937noreply@blogger.com0tag:blogger.com,1999:blog-3092647437783830192.post-40820022913079918252015-07-22T23:23:00.003-04:002015-07-22T23:24:10.663-04:00Quick Fix: Could not load file or assembly when instantiating XAML component (or IoC component) in a Visual Studio Extension Project<h5>Symptom</h5>
You're writing a Visual Studio extension that involves some XAML or IoC code that references a DLL file. When you attempt to instantiate the control, the debugger pops up with "Could not load file or assembly". You've checked the Fusion log and noticed that the loader isn't even looking in the folder that contains the extension!
<h5>Fix</h5>
Add the <code>[ProvideBindingPath]</code> attribute to the class that represents the package (the class that derives from the <code>Package</code> base class, which is often completely empty).
<h5>Why?!</h5>
Visual Studio's extension engine usually figures out where your referenced .dlls are and binds them without trouble. However, when a .dll is referenced from XAML or resolved through an IoC container, the reference isn't explicit enough for Visual Studio to handle binding properly. The <code>ProvideBindingPath</code> attribute tells the extension engine to also look in the extension's folder when attempting to resolve dependencies. In my case it was a fancy options page that used XAML for its UI rather than the automatically generated UI.Matthew S. Dippelhttp://www.blogger.com/profile/09065753238713480937noreply@blogger.com0tag:blogger.com,1999:blog-3092647437783830192.post-74570044670584446352015-06-27T20:13:00.000-04:002015-06-29T10:53:35.199-04:00Stay Frosty Extension Preview - A WPF ShaderEffect for chiseled or beveled text<h5>Stay Frosty</h5>
There's always been a retro movement in the programming world. Every time a new tool comes out that reduces the need to memorize commands or the complete API of a language, there are many among us who wish we still had a green terminal and punch cards. The elite look down on those of us who love Visual Studio, spouting off about the advantages of vi/vim, emacs and other ancient, though powerful, tools. I came from that era. Screw it. I like my Intellisense and GUI-based IDEs. I like defaults that build (most of the time). I like using a keyboard shortcut to fire the whole thing off in a debugger, and I enjoy a gargantuan 32-bit text editor.<br />
I use many text editors for lighter-weight work. Sublime is awesome, as is the atom.io editor I'm starting to fall in love with. But my primary language for work and play is C# and Visual Studio has the tooling I need for that work. And if you're going to use a GUI interface for text editing, it might as well be really, really Gooey!<br />
For the last 4 years or so, I've had an extension I wrote for myself to add a simple background graphic to the text editor window. I wrote it for Visual Studio 2010, originally, and since that time there have been many others that do the same thing. I decided late last year to take it a step further.
<h5>What, precisely, are you trying to fix?</h5>
I used to get Migraine headaches regularly. I had tried triptans (and found out that my body does not metabolize them correctly) and many other medicines. None worked. Thankfully that's in the past, now, but prior to last year I had to figure out how to live my life and do my job while dealing with a very intense headache. I've never let Migraine keep me from living my life. Unfortunately, displays of any kind seem to trigger the worst sensitivities. My only trick was to work in a dim room with the Brightness setting on my display turned down. Unfortunately, many colors become invisible as the brightness drops. My options were limited, so I used the <a href="https://visualstudiogallery.msdn.microsoft.com/20cd93a2-c435-4d00-a797-499f16402378">Visual Studio Color Theme Editor</a> extension to keep two themes, one titled Migraine and one titled Normal. The Migraine theme had most of the syntax rules set to the same color, and all of the text was very, very bright. It was not ideal for using day-to-day but it worked when I needed it.<br />
Sometime around 2011 I purchased a 27" iMac. At the time it was the least expensive IPS display at its resolution, and it had the added benefit of a Mini DisplayPort plug on the back that let you plug in an external PC. I used it as my monitor and rarely logged into OS X. I did, of course, download Xcode and all of the Apple development environment tooling. One morning I woke up with the familiar headache symptoms and began work in my home office. The Windows box had shut down and I was left on my Apple screen, so I powered up the PC and logged into my Mac to waste away the boot time on the web. The last time I had logged in I had left Xcode (I think that's what it was) up. I noticed immediately that at the lowest display setting, all of the text was readable. The principal reason for this was a very subtle text chiseling applied to the code editor window. This got me <strong>*very*</strong> excited -- I could code using my normal syntax highlighting and still <strong>read the text at low brightness</strong>! A quick look around, though, spoiled my excitement. There was no ShaderEffect to be found that could apply this chisel. I'd have to write my own, and I'd have to write it in a language I was unfamiliar with. Having too much to do already, I put it down until late last year.
<h5>A Mac-like Chisel Effect</h5>
The brilliant folks in Cupertino did something rather simple. The effect was no more than a subtle glow on the bottom and a shadow on the top. Surely this couldn't be difficult to reproduce. It turns out, in the end, it wasn't all that difficult. I am not a game developer. I've never written a line of shader code. While I am plenty familiar with C and comfortable with the math, I found myself in hell without a debugger. Something so straightforward was complicated by subtle requirements WPF imposes on ShaderEffects. Multipass? Nope. Alpha that works like the rest of the libraries I was familiar with? Nope. It's <a href="http://blogs.msdn.com/b/shawnhar/archive/2009/11/06/premultiplied-alpha.aspx">pre-multiplied</a>. I was amazed at the body of knowledge I had to immerse myself in to write a simple ShaderEffect for 2-dimensional text. I wanted, after all, a pixel above and a pixel below the boundaries of the font. I was up against ClearType, DirectX and the fact that in the realm of 3D the concept of a pixel is absent, and of course, GPU parallelism. Fun.
<h5>The Stay Frosty Extension</h5>
The title is a bit of a joke. I got a chuckle out of Scott Hanselman's comment in the first paragraph of his <a href="http://www.hanselman.com/blog/VisualStudioProgrammerThemesGallery.aspx">Visual Studio Programmer Themes</a> blog post and thought I'd swipe it.<br />
The main purpose of the extension is to improve code text readability. WPF has long had a lot of complainers about how it renders text. In the version used by Visual Studio 2013, you can tweak the output with several rules around font smoothing, hinting and rendering, and you can require that the output snaps to the pixels of the device displaying it; I've exposed all of these as settings. I've also tweaked the window to allow the placement of a background image and an override of the background color without requiring the aforementioned Theme Editor extension to be installed (and it'll override whatever theme you've put there, as well). In addition, you can optionally apply the ChiseledText ShaderEffect and control the settings around it (described below). I plan to add many, many more capabilities, and it will be released under the Apache 2.0 license as well; however, that's my planned first release.<br />
Here's how it looks in my environment fully customized:
<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhm8QwsJHzrKSrZTGYQdnBM2gX96O-Mm2Ee3F5w-eJlpLmRjTKHGd-Mj6ab_JDJmn4Yw46NRulROfltOWy3NQZ05KMQNRXo5dR5EH-6QDqxZibWsweXJYweB9q91d_1mX2AZKcgYUg18OYZ/s1600/StayFrosty.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhm8QwsJHzrKSrZTGYQdnBM2gX96O-Mm2Ee3F5w-eJlpLmRjTKHGd-Mj6ab_JDJmn4Yw46NRulROfltOWy3NQZ05KMQNRXo5dR5EH-6QDqxZibWsweXJYweB9q91d_1mX2AZKcgYUg18OYZ/s1600/StayFrosty.png" /></a></div>
The image above shows the effect at about 50% which is where I like it.
<h5>The ShaderEffect</h5>
You can download the library complete with the HLSL source and the ShaderEffect class implementation <a href="https://bitbucket.org/Diagonactic/diagonactic.wpf/src">from its BitBucket repo</a>. The effect is simple but could still use some tweaking. It's "good enough" at the moment, but if you have any improvements, I'll gladly accept them!<br />
<h5>How it works</h5>
Very loosely, it works by layering darkened and lightened versions of the input, offset by the amount supplied in the Size parameter. That's not really how it works, but that's how it appears to work when you use it. In addition to the two layers, the original sampled pixel is mixed in so that parts of the font aren't wiped out as the shadow and glow are offset. Thanks to pre-multiplied alpha and some blending, the main part of the font remains identical to the input color while the glow and shadow come out lighter and darker. As I said, it's very simple.
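The idea can be sketched loosely in Python. This is a rough model of the blend just described, not the actual HLSL: the names (<code>size</code>, <code>glow_intensity</code>, <code>shadow_intensity</code>, <code>mix_divisor</code>) are hypothetical stand-ins mirroring the effect's properties, and the image is simplified to a single vertical column of grayscale values between 0 and 1.

```python
def sample(column, y):
    """Clamp-to-edge sampling of a vertical column of grayscale pixels (0..1)."""
    return column[max(0, min(len(column) - 1, y))]

def chisel(column, size=1, glow_intensity=0.25, shadow_intensity=0.25, mix_divisor=3.0):
    """Mix each pixel with a lightened copy offset one way and a darkened
    copy offset the other, then divide the sum by mix_divisor."""
    out = []
    for y in range(len(column)):
        original = sample(column, y)
        glow = min(1.0, sample(column, y + size) + glow_intensity)      # lightened layer
        shadow = max(0.0, sample(column, y - size) - shadow_intensity)  # darkened layer
        out.append((original + glow + shadow) / mix_divisor)
    return out

# A flat region is unchanged: the lightened and darkened layers cancel.
flat = chisel([0.5] * 8)

# A dark-to-light edge picks up a brighter rim on the dark side and a
# darker rim on the light side, which is the chisel illusion.
edge = chisel([0.0, 0.0, 0.0, 1.0, 1.0, 1.0])
```

With the default <code>mix_divisor</code> of 3.0 a flat field comes out unchanged, which is why the visible result is mostly the rims at glyph edges.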
<h5>Usage and Limitations</h5>
Reference the library and add the effect to any text element. It'll work with other elements, but the results may not be all that great. I designed it for small text, since it was aimed at the code panel in Visual Studio, but it works fine on large text as well, provided you tweak the effect settings accordingly. The effect is applied vertically only. It would be easy to change it to allow a specified angle, and in fact the earliest working prototype did exactly that, but it nearly doubled the instruction count and performance suffered. I wanted this thing to work well with software rendering and to be as inexpensive as possible, so that's a trade-off I accepted, and it's the other reason the effect will start to look less than ideal with large text.<br />
<ul>
<li><b>Size</b> - This is the size (roughly) in pixels of the effect. This is really the offset of the two layers. The glow layer is usually far more visible than the shadow layer so changing this might have the illusion of only increasing the glow portion of the effect. It's a double, so any fraction is fine. If you use a negative number, the chisel will appear as a bevel. The ideal value will vary depending on the target size of the text. The effect above uses a value of 0.5f which is also the default.</li>
<li><b>GlowIntensity</b> - This increases the intensity of the glow portion of the effect. Remember that the final color is a combination of glow and shadow, so increasing the glow will also increase the overall brightness of the font. Sorry about that. The default value is 1.0f.</li>
<li><b>ShadowIntensity</b> - Same as Glow, but for Shadow. If Glow and Shadow are the same value, the font in the middle should be about the same color as the input.</li>
<li><b>MixDivisor</b> - It's a terrible name and I will be changing it. At the end of the HLSL, the input pixel and the new values calculated for the two layers are divided by this number. 3.0f is the default and causes the text to barely blend with the background. Increasing this value will make the text and the effect blend more with the background. Use the Intensity and this value together to get the desired result. The text in the picture above uses a value of 3.5 on a background of #FFAAAAAA.</li>
</ul>
The background and foreground of the text play a huge role in how the effect looks. As you can see from the picture above, the darker text shows almost no effect, which was what I wanted for XML comments in my solution. Colors whose R, G and B values are all very close to 0xFF or 0 show little to no effect, as do colors placed on backgrounds close to 0xFF or 0. Placing the text on a variable background works excellently for some combinations and terribly for others, so you'll need to experiment. Play around with MixDivisor and the intensities and you should be able to make it work; like all things related to shaders, it works by optimized optical illusion.<br />
I also strongly recommend a font with some meat on it. Mac OS renders fonts differently than Windows does: they're chunkier and cleaner. Google MacType if you want similar results in Windows, or use a very high-quality OTF font. I use Source Code Pro Medium, free from Adobe, and it works quite well.
I'll write up some more later explaining the effect in detail. I won't waste much time on the C# code required to use the shader; it's very simple.<br />
As far as the Stay Frosty extension, I'm writing it in some very limited free time. It's nearly done except for a few issues around settings that I'd like to sort out.<br /><br />
You can download the entire library with its code <a href="https://bitbucket.org/Diagonactic/diagonactic.wpf/src">from its public BitBucket repo</a>. It has the path to fxc.exe hardcoded in the pre-build. You'll need to ensure you have the Windows 8 SDK or the DirectX SDK and you'll need to modify the path accordingly. That was a time trade-off for me.Matthew S. Dippelhttp://www.blogger.com/profile/09065753238713480937noreply@blogger.com0tag:blogger.com,1999:blog-3092647437783830192.post-13284513356648292502015-05-11T20:40:00.000-04:002015-05-13T20:56:45.359-04:00Reliving the Past - The Visitor Access KioskEvery once in a while I like to google myself, as past posts will show. One thing I had never looked for was a small promotional video that Global Crossing had sent to a few folks about a project I handled for them. We had a new office being built in Rochester, NY around the time we were involved in a program with Microsoft to help them improve the developer experience around Lync.
<br />
<h5>
Before this sounds horribly boastful</h5>
This project was a defining moment for me as a developer. I had, up to this point, been mostly focused on Web/Database applications that were used internally. I had quite a few wins in that area, but nothing that ever extended beyond the walls of the company (other than a project a few years earlier that was sold to a vendor of ours while we were in the throes of bankruptcy).<br />
Stupidly, I had never thought to just "google it". At the time, Global Crossing didn't put videos on YouTube, and I'm not entirely sure how it even got there. It was a *monumental* task and I was *not* the right developer to be doing it, but I love code and I love solving hard problems (from time to time). In this case, the VP of our department (an awesome guy whose name I'm omitting because I haven't asked for his permission) asked if I could create a kiosk that would allow a visitor to contact someone in the office from a secure visitor lobby, have a video/voice conversation with them (one-way video, inbound to the employee only), print a one-day-use badge with the visitor's picture, name and relevant information, and give the visitor a way to sign out when the visit was over. This was to replace the need for a dedicated receptionist or a pen-and-paper log book.<br />
<h5>
Some implementation details</h5>
Unfortunately, the only copy of the original whitepaper has been removed from Microsoft's site -- it was written for Office Communicator 2007 R2 (technically we ran it on that version, but it was developed targeting the pre-release of R2) -- so I'll include that here.<br />
I implemented it as a WPF application, touch-only (at a time when a 1024x768 SAW touch screen was $1500). I used a barcode scanner, a simple label printer attached to a USB-to-Ethernet device, a Logitech camera, and a ThinkCentre PC running Windows XP (stripped of everything unnecessary, including the Explorer shell). All of the code was my work (that's a statement of fact, not pride -- I would be embarrassed by that code today). My good friend and former coworker George Morell and I handled the operating system hardening -- when you have a guy at your disposal who can tell you the location of every obscure OS registry setting from Windows 2000 to Windows 8.1, you defer to that incredible expertise. And our security guys did some network hardening to ensure that if someone took a sledgehammer to the device and grabbed its Ethernet port, it'd be worthless to them.<br />
<br />
Most of the application was written in C# (.Net 3.5, I think; I remember thinking ... "generics?" ... like "templates in C++?" ... no ... not exactly) with some bits in C and one really nasty bit in C++ (the camera interface). It was not a difficult thing to write, but at the time, it was a difficult thing for <b>me</b> to write.<br />
Without further ado, the Visitor Access Kiosk.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<iframe allowfullscreen="" class="YOUTUBE-iframe-video" data-thumbnail-src="https://i.ytimg.com/vi/aV3kaBo4GQM/0.jpg" frameborder="0" height="266" src="https://www.youtube.com/embed/aV3kaBo4GQM?feature=player_embedded" width="320"></iframe></div>
The guy in the video is my former boss; he worked out of the Rochester, NY office where it was filmed. I had nothing to do with the video or its upload to YouTube. Who knows, it may get taken down, but for the time being, I wanted to have a link to it that I could refer to.<br />
<br />
I had to laugh while watching it. The UI is comical today, but keep in mind that this was developed at a time before Windows 7 was released. Windows Vista and Windows XP's "chrome" were the inspiration behind the design. And I had some fun with XAML. Matthew S. Dippelhttp://www.blogger.com/profile/09065753238713480937noreply@blogger.com0tag:blogger.com,1999:blog-3092647437783830192.post-45002541891444580402015-04-06T03:30:00.000-04:002015-04-07T12:44:30.722-04:00FIX: Fix Plex Media Server Remote Access / Unable to Connect to Plex from The Internet (code=-52)<h5>Symptoms</h5>
You've opened the ports on your firewall and verified that they are open via an open port check tool on the Internet.<br />
You've tried <a href="https://support.plex.tv/hc/en-us/articles/204281528-Why-am-I-locked-out-of-Server-after-password-reset-or-device-token-removal-">Why am I locked out of Server after password reset or device token removal</a>.
You've successfully pulled out many locks of hair (optional).<br />
You've even turned on UPnP on your router, just to test (you can turn it off, just make sure the port is opened and forwarded correctly).<br />
<h5>Error 52 / code=-52</h5>
Your log has this line:
<pre>
WARN - MyPlex: Invalid response when mapping state (code=-52)
</pre>
<h5>What's happening</h5>
Some part of the Remote Access registration process fails if the host operating system is configured to use Jumbo Frames. If you don't know what that means, that's OK. I'm running an openSUSE Linux box as my Plex server and I am not a Linux expert, so I can't really tell you where to change this on other systems (if you know, leave a comment, please!).<br />
For OpenSUSE, do the following:
<pre class="brush: bash">
$ sudo yast
... Your password ...
</pre>
In yast, go to <b>Network Devices</b> and select <b>Network Settings</b>.<br />
Select each network adapter and choose <b>Edit</b>.<br />
Go to the General tab and change the MTU to a value that will prevent Jumbo Frames (1400 is a safe test value for most people, but you can google the optimal value for the kind of broadband you subscribe to).<br />
Exit and try it all again (I restarted the Plex server as well, just for good measure).Matthew S. Dippelhttp://www.blogger.com/profile/09065753238713480937noreply@blogger.com0tag:blogger.com,1999:blog-3092647437783830192.post-72417567813376579932015-03-30T23:03:00.000-04:002015-05-13T20:56:10.884-04:00Nemerle, Nitra, Parsing and Functional Programming<h5 id="functional-programming">
Functional Programming</h5>
I started looking into Functional Programming about ten years ago, and when F# 1.0 came out, I decided to start paying attention again. I initially explored it as part of a project I had embarked upon to parse a DSL and provide a suitable development environment that a non-developer could use to customize my application. It was a lofty goal and was ultimately scrapped in favor of a much-stripped-down model driven via web forms and a small subset of the features, but the DSL was still available for more complex needs.<br />
<h5 id="parsing-hlsl">
Parsing HLSL</h5>
Recently that need arose again. I've been messing around with WPF Shader Effects and 2D sprites. No, I'm not developing a game (sorry, no real interest here). The biggest frustration for me has been the lack of debugging information available. My usual process for exploring new languages is a combination of reading/research and poking/reverse engineering. I was the kid that took apart every piece of broken/outdated electronics growing up. After picking up "The Basics", I usually dive right in and write some (terrible) code to see what it does. I'll also read/step through open source code that I think I conceptually understand to validate my understanding. <br />
HLSL makes my learning process more difficult. I can't step through the code; I can't watch. The way it encodes colors into float4 values is sufficiently different from what you'd expect that predicting what a value will be requires you to know about things like pre-multiplied alpha and the mapping of texture channels to float values between 0 and 1. Your college calculus class will come in handy even if you did poorly: most things related to programming and math require you to have <i>a good understanding</i> of how something works even if you don't remember the details. <a href="http://www.wolframalpha.com/">Wolfram|Alpha</a> will help you if you've forgotten the specifics.<br />
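To make the float4 surprise concrete, here is a small Python sketch of the two conversions involved; <code>to_premultiplied_float4</code> is a hypothetical helper, not WPF's API, but the arithmetic matches what a WPF ShaderEffect sees: 8-bit channels scaled into 0..1, with the color channels pre-multiplied by alpha.

```python
def to_premultiplied_float4(r, g, b, a):
    """Convert 8-bit RGBA to the float4 a WPF ShaderEffect samples:
    each channel scaled into 0..1, color channels multiplied by alpha."""
    alpha = a / 255.0
    return (r / 255.0 * alpha, g / 255.0 * alpha, b / 255.0 * alpha, alpha)

# Opaque pure red is the expected (1, 0, 0, 1)...
opaque_red = to_premultiplied_float4(255, 0, 0, 255)

# ...but half-transparent pure red arrives with its red channel already
# halved, which is what trips you up when predicting shader inputs.
half_red = to_premultiplied_float4(255, 0, 0, 128)
```

This is exactly why eyeballing a color and predicting the sampled float4 goes wrong until the pre-multiplication becomes second nature.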
Built-in IDE tools provide barely any information when the HLSL files are rendered by WPF. I'm not sure how valuable they are outside of that, but they're useless here. The larger tools available from NVidia/Intel are overkill and mostly don't cover the areas I was hoping (the NVidia product simply didn't support most of its features on my particular GPU ... <i>Notebooks</i>).<br />
<h5 id="building-a-narrow-purpose-ide">
Building a narrow-purpose IDE</h5>
I started out exploring WPF ShaderEffects using the wonderful <a href="http://shazzam-tool.com/">Shazzam</a>. Unfortunately, at the time of this writing, downloading the tool from his site isn't possible but a slightly outdated version (in code only) can be acquired from <a href="https://shazzam.codeplex.com/">CodePlex</a>. I tweaked it a bit for my purposes and created a shader effect that I <i>mostly</i> liked. Getting to the fine details, though, was difficult and I knew I'd need more insight into what was going on.<br />
<h5 id="enter-nemerle-and-f">
Enter Nemerle, F# and Nitra</h5>
The first thing I wanted was better parsing of the code before compiling. I went digging around and found a syntax highlighting definition for HLSL. The language has some unique elements, such as vectors whose parts are accessed via array notation or via .xyzw and .argb swizzle notation. It's quite elegant, but my eyes weren't naturally parsing it, coming from languages that didn't do that. <br />
The rabbit hole began there. The editor used by Shazzam was a very old version of the one used by SharpDevelop. It's since been updated, and the core text editor can be grabbed independently from NuGet as AvalonEdit. It works quite differently from the version Shazzam shipped with, and syntax highlighting is much more powerful. So I created a syntax highlighting definition for AvalonEdit that colors the .argb accessors as variants of gray, red, green and blue, and uses shades of gray with underlining for .xyzw (I'll get the definition out on GitHub when I have a chance and link to it here). <br />
The Shazzam tool was also missing Intellisense and some of the code signals that Visual Studio provides, so I went looking for a solution there knowing that my old friend F# was probably going to find a way back into my life for this one. <br />
Surprisingly, it was <a href="http://nemerle.org/">Nemerle</a> and the <a href="http://blog.jetbrains.com/blog/2013/11/12/an-introduction-to-nitra/">Nitra project from JetBrains</a> that got my attention this time. <a href="http://timjones.tw/blog/archive/2014/11/24/getting-started-with-jetbrains-nitra">Tim Jones' Getting Started with JetBrains Nitra</a> specifically covered parsing HLSL! <i>Thanks for that!</i> <br />
It lacked the ability to understand sampler2D and had no preprocessor support, but after adding sampler2D, I had a library that integrated easily with AvalonEdit to highlight syntax failures and provide guidance for repair ... all from a concise syntax file.<br />
I wanted something a little more. Inspired a while ago by <a href="https://vimeo.com/36579366">Bret Victor's talk</a>, I decided to see if I could create an IDE that would give useful feedback about how the program works while it is being developed. HLSL is a limited-purpose language, and the restrictions WPF places on ShaderEffects narrow it even further. It's the perfect target for an IDE that gives intelligent feedback about the program while it is being written. By providing sample inputs, and the ability to supply custom inputs for the function, a person can see what the outcome of the code would be in various scenarios ... as close to live as possible.<br />
At this point, I'm building a new HLSL parser specifically targeting the limitations imposed by WPF to ensure that code syntax and <i>rules</i> are followed carefully.<br />
I'm then going to start evaluating ...<br />
<h5 id="enter-the-god-awful-visitor-pattern">
Enter the God Awful Visitor Pattern</h5>
I'm an OOP guy most of the time. The paradigm fits well much of the time, and most .NET programmers are comfortable with it. However, <a href="http://en.wikipedia.org/wiki/Visitor_pattern">the Visitor Pattern</a> hurts my brain.<br />
C# lacks <a href="http://en.wikipedia.org/wiki/Multiple_dispatch">multiple dispatch</a> without the DLR, and I'm not interested in pushing type safety all the way to runtime. The Visitor Pattern is the standard way around that problem. The pattern itself is relatively straightforward, but figuring out its intent by reading the code is painful. F# and Nemerle have pattern matching, which gives you the effect of multiple dispatch. The resulting code is readable with only a small reference to the syntax (Nemerle is easier to read coming from C# than F#, IMHO), and there's substantially less of it.<br />
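To make the contrast concrete, here's a minimal double-dispatch sketch in C#. The node types (Literal, Add) and the Evaluator are hypothetical stand-ins for illustration, not my actual parser's AST; the point is how the logic scatters across Accept/Visit pairs:

```csharp
// A tiny expression AST. Each node must "accept" a visitor so the
// concrete node type can be dispatched on at compile time (double dispatch).
interface INode { T Accept<T>(IVisitor<T> visitor); }

class Literal : INode
{
    public float Value;
    // 'this' is statically a Literal here, so Visit(Literal) is chosen.
    public T Accept<T>(IVisitor<T> visitor) => visitor.Visit(this);
}

class Add : INode
{
    public INode Left, Right;
    public T Accept<T>(IVisitor<T> visitor) => visitor.Visit(this);
}

interface IVisitor<T>
{
    T Visit(Literal node);
    T Visit(Add node);
}

// The actual evaluation logic ends up scattered across Visit overloads,
// with the recursion routed back through Accept on each child.
class Evaluator : IVisitor<float>
{
    public float Visit(Literal node) => node.Value;
    public float Visit(Add node) => node.Left.Accept(this) + node.Right.Accept(this);
}
```

In Nemerle or F#, the same evaluator collapses into a single match expression over a variant type, with the compiler checking exhaustiveness, which is a big part of why I moved the code.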
<h5 id="my-plan-for-next-time">
My Plan for Next Time</h5>
Unfortunately, this is not my day job. It's Saturday/Sunday, early-morning/late-evening work. I have no code samples at the moment because I just moved my F# code to Nemerle (and it's so much easier to understand). I hope to spend some time with the parser next weekend, when I'll have a little more time to poke at it. If I get it to the point I want, I'll write it up and throw it out on GitHub.<br />
Until then ... Don't panic!
<br />
<a href="https://github.com/rsdn/nemerle/wiki/Nemerle-language">Nemerle documentation</a> can be found here.<br />
<a href="https://confluence.jetbrains.com/display/Nitra/Syntax">Nitra's syntax</a> isn't terribly well documented, but it's reasonably easy to follow after going through the Nemerle documentation.
Matthew S. Dippelhttp://www.blogger.com/profile/09065753238713480937noreply@blogger.com0tag:blogger.com,1999:blog-3092647437783830192.post-62492822661768875232015-03-24T18:31:00.000-04:002015-04-07T12:41:55.935-04:00
<h1 id="howto-add-code-syntax-highlighting-to-your-blogger-blogspot-blog">HOWTO: Add Code Syntax Highlighting to your Blogger / BlogSpot Blog</h1>
<h5>Solving my own problems</h5>
Last year I decided to poke about in my blog again and managed to utterly destroy the template, along with all of my past customizations. This time I've decided to figure it out properly and document it, so I can do it again the next time I decide to "code in production." So, in the typical slimmed-down style, here's how it's done.<br />
<ol>
<li>Get to your blog's dashboard (log in and select your blog from the Blogger dashboard).</li>
<li>Select <u>Template</u> in the left-hand navigation, then <u>Edit HTML</u> on the screen that loads.</li>
<li>Locate the <strong>&lt;/head></strong> tag. (Side note for testing later: it's better not to put this in &lt;head>, because the scripts will block the page from displaying until they load.)</li>
<li>Paste the (hopefully now highlighted) code below. We're using <a href="http://cdnjs.com/">cdnjs</a> as the host for the .js scripts. I've read other instructions that suggest hot-linking directly to the author's site, but I checked the install instructions there, and he specifically asks you to host the files yourself rather than hotlink.</li>
</ol>
<h5>Template Code</h5>
Here's the code I added to &lt;head>:
<pre class="brush: xml">
&lt;script type="text/javascript" src="https://cdnjs.cloudflare.com/ajax/libs/SyntaxHighlighter/3.0.83/scripts/shCore.min.js">&lt;/script>
&lt;script type="text/javascript" src="https://cdnjs.cloudflare.com/ajax/libs/SyntaxHighlighter/3.0.83/scripts/shBrushPlain.min.js">&lt;/script>
&lt;script type="text/javascript" src="https://cdnjs.cloudflare.com/ajax/libs/SyntaxHighlighter/3.0.83/scripts/shBrushCSharp.min.js">&lt;/script>
&lt;script type="text/javascript" src="https://cdnjs.cloudflare.com/ajax/libs/SyntaxHighlighter/3.0.83/scripts/shBrushBash.min.js">&lt;/script>
&lt;script type="text/javascript" src="https://cdnjs.cloudflare.com/ajax/libs/SyntaxHighlighter/3.0.83/scripts/shBrushJScript.js">&lt;/script>
&lt;script type="text/javascript" src="https://cdnjs.cloudflare.com/ajax/libs/SyntaxHighlighter/3.0.83/scripts/shBrushXml.js">&lt;/script>
&lt;link href="https://cdnjs.cloudflare.com/ajax/libs/SyntaxHighlighter/3.0.83/styles/shCore.min.css" rel="stylesheet" type="text/css" />
&lt;link href="https://cdnjs.cloudflare.com/ajax/libs/SyntaxHighlighter/3.0.83/styles/shThemeMidnight.min.css" rel="stylesheet" type="text/css" />
&lt;script type="text/javascript">
SyntaxHighlighter.config.bloggerMode = true;
SyntaxHighlighter.all();
&lt;/script>
</pre>
<h5>And the Html</h5>
I've included the exact HTML used to render the block above. Note that each "<" is replaced with &lt;. That's a limitation I've decided to accept rather than using the alternative &lt;script> block style.
<pre class="brush: xml">
&lt;pre class="brush: xml">
&lt;script type="text/javascript" src="https://cdnjs.cloudflare.com/ajax/libs/SyntaxHighlighter/3.0.83/scripts/shCore.min.js">&lt;/script>
&lt;script type="text/javascript" src="https://cdnjs.cloudflare.com/ajax/libs/SyntaxHighlighter/3.0.83/scripts/shBrushPlain.min.js">&lt;/script>
&lt;script type="text/javascript" src="https://cdnjs.cloudflare.com/ajax/libs/SyntaxHighlighter/3.0.83/scripts/shBrushCSharp.min.js">&lt;/script>
&lt;script type="text/javascript" src="https://cdnjs.cloudflare.com/ajax/libs/SyntaxHighlighter/3.0.83/scripts/shBrushBash.min.js">&lt;/script>
&lt;script type="text/javascript" src="https://cdnjs.cloudflare.com/ajax/libs/SyntaxHighlighter/3.0.83/scripts/shBrushJScript.js">&lt;/script>
&lt;script type="text/javascript" src="https://cdnjs.cloudflare.com/ajax/libs/SyntaxHighlighter/3.0.83/scripts/shBrushXml.js">&lt;/script>
&lt;link href="https://cdnjs.cloudflare.com/ajax/libs/SyntaxHighlighter/3.0.83/styles/shCore.min.css" rel="stylesheet" type="text/css" />
&lt;link href="https://cdnjs.cloudflare.com/ajax/libs/SyntaxHighlighter/3.0.83/styles/shThemeMidnight.min.css" rel="stylesheet" type="text/css" />
&lt;script type="text/javascript">
SyntaxHighlighter.config.bloggerMode = true;
SyntaxHighlighter.all();
&lt;/script>
&lt;/pre>
</pre>
<h5>Customizing for your purposes</h5>
<a href="http://alexgorbatchev.com/SyntaxHighlighter/">SyntaxHighlighter</a> can be customized. At this point, there are <a href="http://alexgorbatchev.com/SyntaxHighlighter/manual/brushes/">many available brushes</a> to choose from. Find the brush you want, then visit <a href="https://cdnjs.com/libraries/syntaxhighlighter">the cdnjs page for SyntaxHighlighter</a> to get the link.<br />
There are also <a href="http://alexgorbatchev.com/SyntaxHighlighter/manual/themes/">plenty of themes</a>. Find the one you want, grab its link from the cdnjs page, and add it to the scripts above.
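For example, to add one more language, you'd drop in another brush script alongside the ones above. The file name below is a plausible example (the Python brush shipped with SyntaxHighlighter 3.0.83); verify the exact path on the cdnjs page before using it:

```html
<!-- Example: add the Python brush. Confirm the exact file name on cdnjs first. -->
<script type="text/javascript" src="https://cdnjs.cloudflare.com/ajax/libs/SyntaxHighlighter/3.0.83/scripts/shBrushPython.min.js"></script>
```

Then wrap your snippet in a pre tag with the matching brush alias, e.g. &lt;pre class="brush: python"> ... &lt;/pre>, remembering to escape the "<" characters in the snippet itself.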