HOWTO: Strong Name Sign a .Net Assembly With a Yubikey/Smart Card and Code Signing Key with AssemblyKeyNameAttribute
Saturday, September 16, 2017
Or, as I'll refer to it from here on out, Part 2
UPDATE: I ran into a problem that caused all of this to stop working, so I’ve updated the post with what I had to do to resolve that. See Troubleshooting below.
If you haven’t read Part 1, you’ll want to do that now. There’s a few things there that I’ll be referring to here. I’d also recommend going through the steps up to the point of changing your project in Visual Studio. You can skip that part and continue on here.
Introducing AssemblyKeyNameAttribute
Something I only hinted at in the last post on this subject was the AssemblyKeyNameAttribute (go ahead, click that link, see how sad that documentation made me). It's the obvious way to handle signing and, being a part of your code, it makes it easy to solve many of the "Bad News" items at the bottom of the last post. You could simply create a new build profile and #if/#endif out the entry in AssemblyInfo.cs0.
Unfortunately, I couldn’t make it work. And then, out of nowhere, it did. Shortly after writing and publishing the last post, I decided to add that attribute back in. I ran compile, received my three PIN prompts and … it built. This was odd, since all past attempts yielded a “Keyset not found” error. I figured that my checking the signing box with delay-sign enabled probably yielded the success, so I undid everything. After a lot of google searching, which yielded a whole mess of Stack Overflow and MSDN Forums questions from a myriad of users who hadn’t figured it out, I ended up with the last post.
The long and the short of it is, most of that post is unnecessary. The two necessary parts follow.
The trick to getting AssemblyKeyNameAttribute working
The most important part is the sn.exe -c "Microsoft Base Smart Card Crypto Provider" command. This must be run or you'll get a Keyset not found error on build.
However, there’s an ugly catch-22. Running sn.exe -c
, if it ends up changing the CSP, can only be done as an elevated user. However, that same elevated user cannot access the key in the personal store of your non-elevated account. So simply adding this to the pre-build or post-build and running Visual Studio under elevation results in the exact same Keyset not found
error on build. Good error messages can mean the difference between solving the problem and … well, that last post should give you an idea.
Unfortunately, it doesn’t look like the sn -c
invocation persists after reboot, so we’re minimally stuck with having to run this command manually once after reboot, or find a way to elevate during build just to run that command. I took the later option.
To get this working, you'll need the script located at this gist. The only requirement is that you have the .Net Framework SDK 4.6, 4.6.1 or 4.6.2 installed; the script uses the SDK to get the path to the sn.exe file so that it'll work regardless of how your system is configured.
First, make sure your execution policy for CurrentUser is set properly:
Set-ExecutionPolicy -Scope CurrentUser RemoteSigned
You can replace RemoteSigned with Unrestricted if you'd like to be a little less secure (the script in the gist is signed, but if you modify it and don't re-sign it, it'll fail on build).
Now go to your project in Visual Studio and head over to the Build Events tab. If you followed the last post and made changes, you should remove everything from the Post Build event except for the SignTool.exe calls. This process won't Authenticode sign your resulting .dll or .exe – it's only strong name signing.
Add a pre-build event as follows:
PowerShell -NoProfile -Command "\path\to\Set-SmartCardCspOnBuild.ps1"
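For reference, here's a rough, from-memory sketch of what a pre-build script like that needs to do. This is not the signed gist: the SDK registry path and the self-elevation approach are my own assumptions, so treat it as a starting point rather than a drop-in replacement.

# Sketch only: locate sn.exe via the .NET Framework SDK registry entry (key name is an assumption;
# it may live under Wow6432Node on your machine), then run sn -c from an elevated child process,
# since the CSP change only sticks when made by an elevated user.
$sdkKey = 'HKLM:\SOFTWARE\Microsoft\Microsoft SDKs\NETFXSDK\4.6.2\WinSDK-NetFx40Tools'
$sdkDir = (Get-ItemProperty -Path $sdkKey -Name InstallationFolder).InstallationFolder
$sn = Join-Path $sdkDir 'sn.exe'
Start-Process -FilePath $sn -Verb RunAs -Wait -ArgumentList '-c "Microsoft Base Smart Card Crypto Provider"'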
Now open up the Properties\AssemblyInfo.cs file and add the following line:
[assembly: AssemblyKeyName("Your Key Container Name")]
If you aren’t sure what your key container name is, consult the linked post, above, about how to find it.
At this point, simply run a build. You'll notice that once the build starts, you'll get an elevation prompt. That's happening in the PowerShell script. I haven't figured out a way to get sn.exe (or any other tool) to display the actual CSP that's in use. Ideally, it'd be nice if it didn't have to switch it on every build to avoid the elevation prompt, but this works until I can find a better way.
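If I had to guess where that setting lives, the strong name code appears to read its default CSP from a value under HKLM\SOFTWARE\Microsoft\StrongName. That location is an educated guess I haven't verified, but if it pans out, a check like this would let the pre-build script skip the elevation prompt when the provider is already correct:

# Guess: sn.exe -c may persist the default CSP in this value; verify before relying on it.
# A 32-bit sn.exe would likely read/write the WOW6432Node view of the key instead.
$current = (Get-ItemProperty -Path 'HKLM:\SOFTWARE\Microsoft\StrongName' -Name CSP -ErrorAction SilentlyContinue).CSP
if ($current -ne 'Microsoft Base Smart Card Crypto Provider') {
    Start-Process sn.exe -Verb RunAs -Wait -ArgumentList '-c "Microsoft Base Smart Card Crypto Provider"'
}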
Troubleshooting
Just as soon as I wrote this on Saturday, the build process stopped working on Sunday. The only thing I had done was switch sn.exe back to the default crypto provider to generate a key so that I could strong name a library I had installed via NuGet that wasn't already strongly named1. I switched it back when I was done, but the build would not work.
Apparently, the sn.exe you use matters!
I had tried to generate the key using sn.exe -k and received an error (the specific error escapes me now, but I recall that it sounded like the "keyset does not exist" error, which is absurd since I was trying to generate a keyset!). I had noticed that ilasm was being picked up from one version of the .NET Framework and sn.exe was being picked up from another, older version, so I modified my path to use the sn.exe from 4.6.2. No dice. On a hunch, I realized that the Smart Card Crypto Provider might not be able to generate a key. This makes sense, since the -k command is used to generate a key and put the key set in a file; smart cards can generate a key but can only export the public key part of the key set. I ran sn.exe -c to reset the CSP back to the default (which I assume is the Base Crypto Provider v1), then ran sn.exe -k and it generated the new key.snk for that library without issue.
Then, I re-ran the build. I received no PIN prompt and an error that the keyset was not found (with the name of my container). Oops, I forgot to switch the CSP back! So I went back to my PowerShell prompt and pointed sn.exe back to the Smart Card Crypto Provider. Right when I did it, I realized this wouldn't work – I was already using the script, above, and this should have happened during pre-build!
And, no surprise, it still didn't work! After a few choice four-letter words, I launched a fresh PowerShell Administrator prompt (which reset my path) and reran the sn.exe -c "Microsoft Base Smart Card Crypto Provider" command. I didn't expect this to work, but … it did.
There were two differences in the second run. My PowerShell profile injects the Visual Studio 2017 developer command prompt environment, which, on my "developer laptop"2, resulted in the sn.exe from the .NET Framework 4.6.1 SDK (specifically the x64 version) being picked up.
Here’s the thing that makes no sense. I used the x64 version of sn.exe
from .NET Framework 4.6.2 when I reset the CSP and it succeeded in breaking the build, but setting it back with that same version didn’t fix it. My PowerShell script, above, uses the x86 version from 4.6.2 and that didn’t fix it3. When I ran the x64 version from 4.6.1 to set the CSP to the Smart Card Provider, it fixed it.
I’m not convinced it has anything to do with the 32-bit vs the 64-bit version of the sc.exe
tool, but I think the .NET Framework version mattered. So, at least until I have time to reproduce the issue, it appears that resetting the CSP using sc.exe -c
with either the 4.6.2 or 4.6.1 version will break things, but only the 4.6.1 version (possibly only the x64 version of that) can be used to fix it.
This is made even more fun by the fact that I can’t find a way to have any of the tools indicate what provider they’re using to perform these operations, which would make it really easy to see what is going on (and allow for a more intelligent build script).
In Closing
Good grief, this is a hassle. Every signing tool that I've used from Microsoft does things slightly differently. All of the .Net Framework tools inherit sn.exe's CSP (csc.exe and ilasm.exe, though I've been unable to get the latter to work with my code signing key4).
From here, I’m going to find a way to discover what CSP sn.exe
is configured for so that I can improve the build script and avoid setting that value if it’s already set correctly. I also recall from past experience with Smart Cards that the OS is supposed to support caching of PIN entry with configurable time-outs. There’s some forum posts ut there that indicate this may be broken in Windows 10 after a patch that was released in June, and since I’m running insider previews, I don’t really have a way of backing that patch out, but I’m not so sure I even have things configured to allow caching, so I’d like to get that solved as well. When I do, I’ll post an update.
0 Of course, this was always possible with the modifications we did to the .csproj file, but not many of us enjoy messing around with MSBuild's wonky XML-based language. Even Microsoft had a (short) moment of clarity there. Unfortunately, it was too big of a mess to untangle.
1 I only use generated keys for strong naming libraries that aren’t mine. This is partly because it feels wrong to sign a library with my code signing key that isn’t my library, even though the purpose of strong naming isn’t to validate ownership/authorship.
2 Read that as "it's pretty banged up and things aren't always what they seem" on this device, i.e. ilasm.exe being picked up in a location in my path that isn't the same as sn.exe, so there could be other things going on here.
3 The reason I specifically picked the 32-bit version is because Visual Studio is a 32-bit application and I assumed that was the right version to use. I’m still pretty sure that’s the right version, but evidence would indicate otherwise.
4 Don’t even get me started on vsixsigntool.exe
, which is the Visual Studio Extension equivalent to signtool.exe
. Yeah, it works differently. So far, only signtool.exe
works somewhat intelligently. The others require a lot of trial and error if you want to do things in the most secure manner (not storing the miserable keyset in a file in your filesystem)
HOWTO: Strong Name Sign a .Net Assembly With a Yubikey/Smart Card and Code Signing Key
Friday, September 15, 2017
Code Signing != Strong Name Signing
Note: This post was updated on 9/16/2017 with some corrections. In addition, a much easier way was discovered and that's been written about in Part 2. You'll still want to read this part since there's a lot here that isn't covered in the follow-up, but skip the Visual Studio related steps.
First things first: Code Signing and Strong Naming serve two different purposes. Code Signing (referred to by the trademarked "Authenticode" in the Microsoft world) verifies that a library or executable originated from a person or organization.
A Code Signing key validates identity. Code Signing keys are most similar to EV certificates (and, in fact, require the same kind of validation involving a notary, lawyer or CPA and a bunch of documentation proving your name, address, phone and other things).
A Strong Name in .NET adds a signature to a library or executable that ensures that when a program references that library or executable, it's getting the right one. When the .Net Framework validates a library's strong name, it doesn't care what the origin of the signature is, or its trust chain. It simply cares that the key is right. So for all practical purposes, there's really no great reason to use your code signing certificate to generate the strong name.
So … why do it then?
My reasons boiled down to a few things. First, I like to strong name my libraries if they're going to be put out on a NuGet server (either the public one or the one I use internally at work or at home). The main reason is that I use some of these libraries in projects that require strong names, so it's an added convenience not to have to ildasm/ilasm sign them on a one-off basis. It's also helpful to others who might use those libraries.
The problem is that I can be a little absent-minded and on at least one occasion, I published a strong name key to a public git repo. Whoops! I know and follow best practices with certificates that I use, but for whatever reason, I treated this “generated on-the-fly” keypair with reckless abandon.
When I acquired my code signing key, I understood the value of what I was getting. My key would serve as a positive identity of me when anyone used an executable signed with it. I purchased a Yubikey to store the key. The Yubikey is basically a USB Smart Card device. Smart Cards work by storing the private key in non-exportable storage and performing all cryptographic operations on-device.
I generated the CSR on a Linux live CD and ensured I made a backup only to an encrypted storage medium protected by a different password than the PIN on my Yubikey. In theory, at least, that private key has never seen the Internet, and will never exist in storage or memory of any machine I sign software on.
All of that trouble also means that if I use that same key for strong name signing, I can never accidentally publish the private key in a repo. Like any good security process, the best way to prevent a leak is to make it impossible to leak.
Strong Name Signing with a Yubikey
To strong-name sign, you use the sn.exe tool. Unfortunately, it's not exactly straightforward to use this tool with a certificate installed to a Smart Card. It's done so infrequently, apparently, that the documentation gives very few hints as to how to actually accomplish it.
Step 1 - Get the Smart Card Crypto Provider
The first thing you need to do is point sn.exe at the right crypto provider. By default, the sn tool uses the Microsoft Base Cryptography Provider, which won't find the key on your Smart Card. Windows 8+ uses the Microsoft Base Smart Card Crypto Provider for smart cards, but if you've installed other smart card providers (OpenSC, for example), this may be different, so we'll verify that.
Launch an Administrator PowerShell prompt – keep it open because we’ll use it later – but for now, run the following:
CD HKLM:\SOFTWARE\Microsoft\Cryptography\SmartCards
gci *yubikey*
Look for the Crypto Provider value. That's the provider for your Yubikey. Open up a (non-administrator) Developer Command Prompt and cd to the folder that has the library you want to sign. Run the following command:
sn.exe -c "Microsoft Base Smart Card Crypto Provider"
Replace Microsoft Base Smart Card Crypto Provider if the Crypto Provider value, above, was different.
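If you'd rather not squint at raw registry output, something like this pulls the Crypto Provider value out of each smart card entry (a small convenience sketch, nothing more):

Get-ChildItem HKLM:\SOFTWARE\Microsoft\Cryptography\SmartCards |
    Get-ItemProperty -Name 'Crypto Provider' -ErrorAction SilentlyContinue |
    Select-Object PSChildName, 'Crypto Provider'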
Step 2 - Get the Key Container Name
This switches the default provider to your Smart Card. Now for the tricky part: we need to find the Key Container name for your code signing key. I've written a script (signed, of course) that will print the key container for your code signing key, provided it's stored within your user's Personal key store (which, AFAIK, is where it needs to be, so if you don't have it stored there you'll need to figure that bit out).
Download the gist for the script here. Save the file and run it.
.\Get-CodeSigningKeyContainer.ps1
It’ll output something along the lines of:
Code Signing Key Located
Subject: CN=Matthew S. Dippel, ...
Thumbprint: 983894AA3EB7BEA35D01248F6F01C3A64117FA66
Container Name: 'c0f031c2-0b5e-171b-d552-fab7345fc10a'
Do a sanity check on the Subject/Thumbprint to make sure you've got the right key and, if you're happy with it, grab the text between the apostrophes on the last line. In the future (or if you want to use it as part of a script), you can run it with the -Quiet parameter and it'll just spit out that value.
Step 3 - Generate a Signing Key that Contains Only the Public Key
Strictly speaking, I’m not sure if this is actually required, but it’s the only way I could get it to work. I’d like to find a workaround, and I’m guessing one exists possibly related to the AssemblyKeyContainerName
attribute, but much like this whole process, it’s poorly documented and I couldn’t make it work. If I figure it out, I’ll update this post accordingly.
In your protect folder (or solution directory – really, the location doesn’t matter much), run the following command in the Developer Command Prompt:
sn.exe -pc "c0f031c2-0b5e-171b-d552-fab7345fc10a" key.snk sha256
Replace the c0f031c2-0b5e-171b-d552-fab7345fc10a with your container name from the PowerShell script above.
What we’re doing here is asking the Strong Name tool to produce the file key.snk
with only the public key (after all, we’re using a Smart Card that has no way of providing sn.exe
with the private key). We’ve told it to use SHA-256, explicitly since (I think) it defaults to SHA-1 which is considerably weaker.
Step 4 - Tell Visual Studio to Use The Key and Delay Sign
This is the lousy part. We have to delay-sign, which is supposed to kill the debugger. We'll update the project to finish the signing process once the build is finished, but I'm not sure if that will add some steps to debugging (I just figured this out this evening and haven't gotten that far in testing, yet). Right-click the library project and choose Properties. Go to the Signing tab. At the bottom, check the box that says Sign the assembly. In the drop-down box, pick <browse> and select the key.snk you generated, above. Then, check the box that says Delay sign only.
Build the project. You may notice a small hang after build starts, followed by your PIN entry prompt. Provide your smart card’s PIN and build will continue.
If you see it sitting there for more than a few seconds, hit ALT+SHIFT+TAB and you’ll see your PIN entry prompt pop up (ALT-TAB works, too, but every time this has happened to me, the PIN entry dialog has been the last item in the window list).
Step 5 - Signing the Library Manually
We’ll automate this as part of the build, shortly, but it’s helpful to do it by hand, once, since you’ll see the output for the signing operation in the command prompt and won’t have to hunt through build output. Go back to that Developer Command Prompt and cd
to the output folder that has the .dll
or .exe
file that was generated from the build.
Run the following command:
sn.exe -Rc MyLibrary.dll "c0f031c2-0b5e-171b-d552-fab7345fc10a"
Note the change in argument order between the -pc and the -Rc commands – this one takes the file first. What you've done here is re-signed the file with the signature. If everything worked, you should have had a window pop up asking for the PIN for your smart card.
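To confirm the signature took, you can also ask sn.exe to verify the assembly. The capital-F variant forces verification even if the assembly has been registered for verification skipping:

sn.exe -vf MyLibrary.dll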
Step 6 - Bonus - Code Sign the Library/Executable
As I mentioned above, strong name signing isn't Code Signing. If you want your library to be Authenticode signed, you'll need to do that separately. There are a few ways to do this, but if you've got a Comodo certificate, I use the following command in the Developer Command Prompt:
SignTool sign /fd sha256 /tr http://timestamp.comodoca.com/?td=sha256 /td sha256 /as /v MyLibrary.dll
When you’re using a Code Signing certificate on a Yubikey, provided there’s only one code signing certificate in your certificate store, there’s no need to point it at the specific certificate. You’ll see output indicating that the library/executable was signed properly.
There’s one thing worth noting here, though. If you need compatibility with Windows Vista or Windows XP, you need to sign the executable twice. The above method will only work for Windows 7 and above. To sign in a manner that is compatible with Windows XP and above, yet still includes the more secure signature for Windows 7 and above, use the following commands:
SignTool sign /t http://timestamp.comodoca.com /v MyLibrary.dll
SignTool sign /fd sha256 /tr http://timestamp.comodoca.com/?td=sha256 /td sha256 /as /v MyLibrary.dll
The first command signs in a Windows XP/Windows Vista compatible manner, the second is identical to what we did, above.
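If you want to double-check that both signatures landed, signtool can verify them. The /pa switch uses the default Authenticode verification policy (rather than the stricter kernel driver policy) and /all checks every signature on the file:

SignTool verify /pa /all /v MyLibrary.dll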
What About Non-Comodo Certificates? A Note About Timestamps
I'm really not sure; I don't have one. The issue is with that /t http://timestamp.comodoca.com switch. You can leave it off, but that's an exceptionally bad idea. Your code signing certificate expires at some point, or you may lose the private key and need to get another one issued, which will revoke the current one. The certificate you're using isn't all that much different from one that is used for EV domains. When those expire, you replace the certificate on the server and everything's fine. You can't, however, replace the signatures on all of the things you've signed that have been copied onto other peoples' machines. To address this situation, a timestamp service is used – that's what this URL points to.
I assume the COMODO timestamp service is meant to be used with COMODO certificates. Chances are good that the company that you purchased yours from operates their own. Consult their site to see what the appropriate values are for that (bearing in mind that there are two kinds of timestamp services that require slightly altered parameters regarding /tr and /td).
There is also at least one timestamp service out there that allows anyone to use it provided you make very few requests. Whatever timestamp service you use, make sure you consult the support area to determine what the request limits are.
Finally - Automating it All
Having to do all of these steps every build is a bit much. Let’s add some post-build steps to automate it all. Go back to the project properties and choose Build Events.
Put the following into the Post Build (see the note below to make sure you change the right things):
"C:\Program Files (x86)\Microsoft SDKs\Windows\v10.0A\bin\NETFX 4.6.2 Tools\x64\sn.exe" -c "Microsoft Base Smart Card Crypto Provider"
"C:\Program Files (x86)\Microsoft SDKs\Windows\v10.0A\bin\NETFX 4.6.2 Tools\x64\sn.exe" -Rc "$(TargetPath)" "c0f031c2-0b5e-171b-d552-fab7345fc10a"
"C:\Program Files (x86)\Windows Kits\10\bin\x64\signtool.exe" sign /fd sha256 /tr http://timestamp.comodoca.com/?td=sha256 /td sha256 /as /v "$(TargetPath)"
IMPORTANT: Those are the paths to the files on my system. Check the paths to sn.exe and signtool.exe, make sure to replace the Crypto Provider if yours is different, and put in your own key container. Mine isn't going to work for you.
Save everything.
The good news is that for every other project, all that's required is running the sn.exe -c and sn.exe -pc commands (Steps 1 through 3) once per project and pasting whatever you ended up with above into the project properties. It'll make repeating this for anything else you've got very easy. It's also portable between machines (provided the paths are the same, though you can replace those with environment variables for Program Files and such). The key container name will be the same on other machines (there are some caveats here that relate to having more than one copy of the key or having more than one smart card, which I ran into; however, you could use the Key Container script, above, to get the container name on every build, too).
There’s a bit of bad news, though:
- You’re going to get prompted for your pin not once, not twice, but three times on every build. I’m not aware of any functionality that allows the OS to cache this operation, but if I find it, it’ll be the first thing I fix since that’s obnoxious.
- Your project will not build without your Yubikey or Smart Card. This also means that if your project is open source, people downloading your code will get build errors. Obviously, you don't want strangers to be able to sign your code with your key, but you do want them to be able to build an unsigned version. Make sure you add a note on how to work around this issue to your readme.md file (one possible approach is sketched below).
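For what it's worth, here's one way that workaround could look: wrap the post-build signing in a small PowerShell script that bails out quietly when the signing certificate isn't present. Everything below is hypothetical and mirrors my setup – the script name, the hard-coded thumbprint/container and the tool paths are all things you'd replace – and note that contributors still end up with a delay-signed assembly, which won't load without sn.exe -Vr or turning off signing in the project.

# Invoke-PostBuildSigning.ps1 (hypothetical helper; replace the thumbprint, container and paths with your own)
param([Parameter(Mandatory)][string]$TargetPath)

$thumbprint = '983894AA3EB7BEA35D01248F6F01C3A64117FA66'
$container  = 'c0f031c2-0b5e-171b-d552-fab7345fc10a'

# No signing certificate (no Yubikey, contributor machine, build server)? Skip quietly.
if (-not (Test-Path "Cert:\CurrentUser\My\$thumbprint")) {
    Write-Host 'Code signing certificate not found; leaving the assembly delay-signed.'
    exit 0
}

$sn       = "${env:ProgramFiles(x86)}\Microsoft SDKs\Windows\v10.0A\bin\NETFX 4.6.2 Tools\x64\sn.exe"
$signtool = "${env:ProgramFiles(x86)}\Windows Kits\10\bin\x64\signtool.exe"

& $sn -c 'Microsoft Base Smart Card Crypto Provider'
& $sn -Rc $TargetPath $container
if ($LASTEXITCODE) { exit $LASTEXITCODE }
& $signtool sign /fd sha256 /tr http://timestamp.comodoca.com/?td=sha256 /td sha256 /as /v $TargetPath
exit $LASTEXITCODE

The post-build event then shrinks to a single line along the lines of PowerShell -NoProfile -ExecutionPolicy Bypass -File "$(SolutionDir)Invoke-PostBuildSigning.ps1" "$(TargetPath)".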
Once you’re all done, build the project and type that PIN three times.
Troubleshooting
I’ve had a pretty terrible time getting this to work, and ran into a few gotchas. These are from memory and may not be correct, but I’m leaving them here as things to try if you get stuck
It probably matters if you’re running elevated
I had a few projects in the past that couldn’t be debugged unless Visual Studio was launched as an administrator. I’m fairly certain that the act of elevation will cause problems in locating the certificate in your personal user store, which is why I specified “A non-administrator Developer Command Prompt” above. If you can’t get the project to build and it complains about not being able to find the key, make sure you’re running without UAC elevation as the user who has the key in their personal certificate store.
More Than One Smart Card
My laptop has a TPM and, for convenience, I created a Virtual Smart Card from the TPM module (it's a cool feature that makes your TPM emulate a Smart Card and basically negates the need for a Yubikey). The problem is that the Virtual Smart Card will be the one that's selected, not the Yubikey, when you set sn.exe to use the Smart Card Provider if the two smart cards share the same provider. I'm sure there's a better workaround, but since I have a Yubikey, I simply deleted the TPM Smart Card.
HOWTO: Import Keybase.io Public Keys to SSH authorized_keys
Saturday, July 29, 2017
A little while back I was looking for a way to add a handful of users to the authorized_keys file on some test servers.
These servers required the existence of only one account that would be used to log in and troubleshoot when needed. They would be rebuilt every morning, and it would probably have been fine to share a password and just log in with shared credentials, but the security guy in me is allergic to enabling Challenge/Response authentication. The alternative – sharing a public/private keypair among users – is also a huge no-no0.
Unfortunately, where public/private keys were in use, they were generally generated by the users themselves. One of the perks of being at a dev shop with a bunch of folks who seriously know what they're doing is that they had generally done this 'correctly'; however, we didn't have a central server that stored a record of the public keys for easy distribution.
Another side-effect of being at a dev shop is that many of the users were Keybase users. Unfortunately, Keybase keys are PGP keys, not SSH keys, and the two key formats are not interchangeable. Worse still, they're really not designed for the same purpose. In the GnuPG world, a key used for authentication would almost always have a sub-key for that purpose. Having been using my Keybase key for SSH login for a while, I've had a script (albeit one that only works with gpg v1) to automate exporting the public/private keypair, making it easy to get the public key to the server with a simple ssh-copy-id. But what about when I have a few users I want to provision without ever handling their private key? I couldn't find a good reference for doing that, so I figured it out on my own.
Importing a GPG public key without the private key and without installing the keybase client
I wrote a shell script, located here, if you want to skip the details and just run it.
Simply login as the user you wish to add an authorized key to and:
chmod 770 ./authorizePublicKeybaseId.sh # only needed the first time
./authorizePublicKeybaseId.sh <id> # where ID is the keybase ID
It requires GnuPG 2 to execute (at least version 2.1.11) because it relies on a feature added in that version.
The script works by grabbing the public key via keybase.io's public API (beta) and calling GnuPG 2 with the --export-ssh-key option (forced with the "!") to convert the key from GnuPG public key format to SSH public key format.
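Stripped of all the version checks, the essence is just a couple of commands. The URL and the fingerprint placeholder below are illustrative (they're my recollection of how Keybase serves public keys, not lines lifted from the script):

# Fetch the user's public PGP key from keybase.io and import it.
curl -s https://keybase.io/<keybase-id>/pgp_keys.asc | gpg2 --import
# Export that key in SSH format; the trailing "!" forces gpg to use exactly this key.
gpg2 --export-ssh-key '<64-bit-fingerprint>!' >> ~/.ssh/authorized_keys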
Because various distributions' packagers install gpg in different ways, there are a few checks to figure out which gpg binary is version 2 (often it's gpg2) and a check to ensure the v2 binary is at the right minor/patch versions to successfully run the script. I also discovered some odd differences in the way that GnuPG 2 behaves between a few distributions – sometimes returning the 32-bit fingerprint rather than the 64-bit fingerprint – so I take an extra step to get the 64-bit fingerprint with some awk parsing.
Currently, this only handles grabbing the public key and it does so without touching the private key (which is something that requires a lot more delicate handling). I’m working on a script to download/import the private key (as well as password protect both the ssh private key and protect it in the GnuPG database). I’ll post that as soon as I’m comfortable that it’s somewhere resembling “safe”, but for the time being, there are several scripts out there that allow you to do this and I’ve tested a few of them against the method I’m using here. They all have worked.
0 I sort of hope I don’t have to explain why, but one big reason is that if one of those employees leaves the company, the shared credential has to be destroyed and removed from every host and a new one has to be issued to all of those users. If one uses
Resetting the Visual Studio Experimental Instance (Visual Studio 2010-2017) via PowerShell
Wednesday, July 5, 2017
There's a handful of things that you have to do frequently enough when debugging a Visual Studio extension that it becomes almost routine, but not frequently enough for you to actually remember the exact shape of the command you need to run.
Since I got horribly tired of having to hit up Bing every time I needed to remember the specific command, I decided to document some of them here.
The TL;DR - Use PowerShell to Reset the Visual Studio Experimental Instance
I’ve created a simple script to reset the Visual Studio instance, available here. It takes two parameters, -Version and -InstanceName (which matches the “RootSuffix” parameter used … most of the time). You needn’t run it from a Developer Command Prompt, it grabs the install locations from the registry.
Some Useful Bits to Remember
Visual Studio Version Mapping and .Net Framework
Marketing Version | Actual Version | Framework Versions |
---|---|---|
2010 | 10.0 | 4.0 |
2012 | 11.0 | 4.5.2 |
2013 | 12.0 | 4.5.2 |
2015 | 14.0 | 4.6 |
2017 | 15.0 | 4.6.2 |
Default Visual Studio Paths
For these defaults, I’m assuming you’re on a 64-bit operating system. If you’re still stuck banging rocks together on a 32-bit OS, just knock out the (x86) where you see it.
Visual Studio 2010 - 2015
The paths for these versions have been pretty predictable. They start in %ProgramFiles(x86)%, which usually maps to C:\Program Files (x86), and are stored in Microsoft Visual Studio 1x.x, where x corresponds to one of the version numbers in the Actual Version column.
Install Root:
"${env:ProgramFiles(x86)}\Microsoft Visual Studio 1x.x"
… or if you prefer cmd.exe:
"%ProgramFiles(x86)%\Microsoft Visual Studio 1x.x"
Visual Studio 2017
Things were reorganized a little bit with Visual Studio 2017. The install root is now located at:
"${env:ProgramFiles(x86)}\Microsoft Visual Studio\2017\<Edition>"
Where <Edition> corresponds to the edition: Community, Professional or Enterprise.
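As an aside, recent Visual Studio 2017 updates also ship vswhere.exe alongside the installer, which can hand you the install root without any registry spelunking. I believe it lives at the path below, but verify on your machine:

& "${env:ProgramFiles(x86)}\Microsoft Visual Studio\Installer\vswhere.exe" -latest -property installationPath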
In addition, the RootSuffix, at least on my machine, is only part of the suffix name. This is a fact that Visual Studio understands, but the tool for creating/managing the experimental instances from the command prompt does not.
The PowerShell script provided above will provide you with the experimental instance names it can find if you attempt to reset one that doesn't exist (as would happen if you provided Exp but the name was actually _70a4f204Exp).
Refresh the Experimental Instance with the Script
Basic help can be found by typing Get-Help ResetExperimentalInstance.ps1 -Full, but here's how you use it:
.\ResetExperimentalInstance.ps1 [-InstanceName] <InstanceName> [-Version <Version>]
Version - Optional if you have only one version of Visual Studio installed. Note that this includes applications that use other versions of Visual Studio, like SQL Management Studio and System Center Configuration Manager's management tools. If you have more than one version installed and omit this parameter, the script will exit but will print the versions that are available.
InstanceName - Required - Usually the same as what is provided as the /RootSuffix parameter in the Debug panel within Visual Studio for your extension. However, it may actually be _[some 32-bit Hex][RootSuffix], i.e. _71af83c4Exp for the Exp instance. If a corresponding folder for that instance is not found, you'll be given a list of all of the instances that are found for the provided version and prompted as to whether or not you want to create a new experimental instance.
The _ in the long name is required for the Visual Studio provided tool, CreateExpInstance.exe, which the script uses. However, the script will look for a folder that only differs by the starting _ and will correct your InstanceName if that's the only difference.
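For reference, the underlying tool can also be driven directly from a Developer Command Prompt. To the best of my recollection the reset invocation looks like the line below, but check CreateExpInstance.exe /? since I haven't re-verified the exact switch names:

CreateExpInstance.exe /Reset /VsInstance=15.0 /RootSuffix=Exp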