God Wants Only Your Good, Even When It Doesn’t Feel That Way

One thing that has surprised me as a parent is how often interactions between my kids and myself mirror my relationship with God. Parenthood really gives you a unique opportunity to be on the flip side of the interaction, where you’re now the one trying to work for the good of someone you love deeply but who has no ability to understand the reasons behind your actions.

This weekend we were unexpectedly provided with the opportunity to see this play out in an even more potent form than normal. As a family, we’ve become very active in (nay, addicted to) Pokemon Go. There was a special event, so we charged up everyone’s phones and tablets and headed out to catch them all. It was a great day, and things were going smoothly. We’d just stopped at a drink machine to get a little refreshment and were headed back up the hallway of a local church to meet my sister-in-law. A few steps is all it took. My four-year-old son, Caleb, took off running to close the ten feet or so between him and mom. And then he fell. Straight down, with no time to react, he landed directly on his face with both hands to the sides. Screaming ensued, as did parental panic, and Pokemon was over. One of his front teeth was clearly knocked back significantly.

Thankfully our pediatric dentist office has someone on call for emergencies (we didn’t even know they did that), so after describing the situation to the dentist on the phone we headed over there. Long story short, there were two options, and the one that made the most sense was pulling the affected tooth. Caleb wails like he’s getting a root canal when they just clean his teeth, so we knew we were in for fun. We tried to talk it up as much as possible to soothe his fears, but he knew something was up. We hated the thought of it, but still, it had to be done. As we laid him back, he was of course scared, and the dentist applied some stuff to numb the area. I kept asking him to look into my eyes, forcing myself to smile and telling him I loved him. I tried asking him about his favorite Pokemon, though he was having none of that. We watched as the dentist brought a large needle around behind his back, where Caleb couldn’t see it (have I ever mentioned I hate needles?). Still I kept my focus on Caleb’s eyes, both of us assuring him that we loved him and that the dentist was helping make him better. The pediatric dentist was awesome and applied the needle without Caleb seeing it. It didn’t hurt him, but he could feel something was being done, so his fear and panic rose. Still we held him, stroked his hand, and told him how this was going to make him better, and how much we loved him. Finally the moment of truth came. The dentist maneuvered the pliers into place, again very slyly. Seeing what was coming, we told him one last time that the dentist only wanted to help him, that we loved him, and that he would be okay. As before, Caleb was numbed but could tell something was happening, and let out a scream the likes of which you’ve never heard (and that dentist will never forget). A few seconds later, which seemed to last an eternity, the tooth was out and Caleb was sitting up in mommy’s arms. We told him it was all over and that now he’d be better and able to eat.

I walked away to check on our eldest, who was in the playroom waiting, and I was physically sick. In fact, both Amanda and I were, but we forced it down in order to be there for Caleb. He will likely never know how we felt during that half hour and thereafter. He doesn’t understand how emotionally wrenching it was to make a decision for his health, knowing that it would mean physical pain in the immediate term and probably emotional pain long-term as it takes years for the adult tooth to grow in (kids can be cruel). He can’t know how much it hurt us to hold him down, forcing him to participate in something he was scared of and didn’t understand. He didn’t have insight into how nauseous and light-headed we felt actually watching what was going on in his mouth (that he couldn’t see), and wishing there was any other way. Within five minutes he was up and playing, no noticeable difference other than gauze hanging out of his mouth. He was even running to get toy tokens for his suffering, as we begged him to be careful and avoid any further accidents. But two days later I still see his bloody mouth when I close my eyes. I hear his screams and my heart breaks all over again. I know without a doubt that we made the right decision, and seeing his recovery and happy play confirms that. But I will always be plagued by dad-guilt for having to put him through it and participate in it. My wife had trouble sleeping that night, and every once in a while we just stop and talk about how horrible it was.

And so it is with God. I know times that I’ve been in pain, facing situations that seemed completely unnecessary. There was no discernible purpose or uplifting silver lining. It just hurt, plain and simple. I saw in the Bible and heard at church that God loved me, that He wanted the best for me, that it was all for my good. But I felt how Caleb must have felt, lying there, looking through his teary eyes at his dad saying the same thing. “You say that, but I don’t believe you. If you really loved me you’d put a stop to this.” But to truly care for Caleb I had to see him healed, and God also allows for lesser pains in our lives in order to accomplish the saving and healing of our souls. After we got home I took Caleb to the side and assured him that mommy and I love him and Carson more than anything in this world, and that we would never do anything that wasn’t for his good. God does the same for you. No matter the hurt, no matter how cruel you think it is that he’d want you to give up some part of your life, whether you understand what’s going on or not, He loves you in a deep and tremendous way. He hurts with all of your hurts, and I’m sure He longs for the day when you can see all the intricacies behind what seems at the moment like simple indifference. And in a pain far greater than mine at watching Caleb lose a tooth, He watched His Son die in order to accomplish your salvation. Just as Jesus Christ trusted God the Father in that trial, being fully convinced of His love and plan to work for good, we can trust God in the same way and walk forward in confidence knowing that our situation is in His loving hands, no matter how we feel.

Kindle Fire HD8 Review

Background: I recently found myself wanting to replace my iPad Mini 2. It’s around four years old and starting to become sluggish enough to be frustrating. Given the incredibly low price of Amazon tablets on Prime Day, I decided to take a risk and try out the Kindle Fire HD 8. Below are my impressions after using it for a couple of weeks.

The Good

The Price. Because it was Prime Day, I was able to get the tablet for $50. Throw in a 64 GB SD card plus a cover, and the total was around $80, which is the normal selling price of the tablet by itself. Even if it weren’t on sale, you have to consider that replacing my old iPad with the current equivalent was going to cost me $300, and that is without expanded storage or any accessories. Right away I’m feeling good about this purchase because of the low initial investment.

The Support. Amazon has now retired the “Mayday” feature, but they still have excellent support built right in. I was having a minor issue where custom playlists were not showing up in Amazon Music. You simply go to the Help app and from there you’re able to request assistance by either email or phone under the “Contact Us” section. The representative contacts you, so you don’t have to wait on hold, and helps you with whatever issue you may have. I’ll admit that at first it sounded like a very low-level call center tech, but nonetheless he was able to resolve my issue quickly. This seems like a great feature for the not-so-tech-savvy folks you may want to gift this to.

The Integration. One of the reasons I was willing to take a risk on a Kindle tablet is that we’ve become pretty big users of the Amazon ecosystem. I listen to Amazon Music frequently. My family watches Amazon Video. And of course we do a ton of shopping on Amazon (who needs Walmart parking lots, am I right?). Because the Fire is an Amazon product, all of these are first-class citizens on it. Not to mention Alexa, who is quickly becoming like a family friend around our house.

The Hardware/Performance. For $50, there is respectable hardware in this device. The screen is crisp and clear, the apps run well (mostly), and moving around the tablet is smooth (mostly). More on the mostlies in a moment. Also, coming from a completely closed-off iPad, having the option to expand storage with an SD card was a very welcome feature.

The Bad

The Apps (or lack thereof). The number one issue with Amazon tablets is the lack of apps. There’s just no way to spin it. You won’t find any of the Google products you likely rely on, like YouTube. Microsoft ones are hit and miss (Outlook but no OneNote). There are many popular ones present, like Facebook, but easily twice as many that aren’t. You can mitigate this by installing the Google Play Store, or by using sites like APKMirror. But (a) this requires a higher technical skill set than many users are comfortable with, and (b) it potentially opens you up to vulnerabilities by bypassing the Amazon app store (you have to enable the installation of apps from unknown sources). Where you fall on the techy spectrum and your views on convenience vs. customization will affect how much of an issue this is for you. I found it workable but annoying.

The Operating System. Amazon’s Fire OS is really just a modified version of Android, and it’s a complete mess. Forgive me if I sound biased coming from a mostly iOS background, but stepping into Android feels convoluted and disjointed. Don’t get me wrong, there are things about it that I grew to like. But overall I still prefer iOS by far. This is not only because of the greater consistency and aesthetic appeal, but also for security and privacy reasons. Being Android at its heart, Fire OS falls victim to all the same issues Android has (e.g., I’ve never had to install antivirus on a tablet before). I do, however, feel that privacy is more in the user’s hands with the Fire tablets than with those that come completely pre-stocked with Google’s apps and framework.

The Interface. Jumping off of the OS point, the custom interface of Fire OS leaves a lot to be desired. In fact, it would be much better if they just left it at stock Android instead of adding their own customization. I realize that much of the intent is to focus you on Amazon content. (That is, after all, why these are so cheap. They want it to be a gateway to Amazon services.) I would argue, however, that the confusing interface actually makes this more difficult. Want to watch something on Prime Video? You go to the Video tab, right? Wrong. That tab will advertise videos to you, but it doesn’t list your watchlist, etc. I found it much easier to simply go to the actual Prime Video app, which felt more full-featured and more readily presented what I was looking for. In fact, I moved it and everything else possible to the Home page so that I could avoid flipping through the various tabs. They aren’t at all customizable, and after a short time they became something I avoided completely. Part Android issue, part Fire OS issue: I always felt like there were multiple ways of accessing similar things and rarely clear rules as to which should be used. On a less important note, there is a litany of small UI issues that are more preference than anything else (e.g., I still don’t know how to copy/paste correctly).

The Performance. Before I comment on this, let me remind you that this is a $50 tablet. That being said, if you’ve used tablets of a higher caliber, then there is a certain level of responsiveness you’ve become accustomed to, even without realizing it. I went in with rose-tinted glasses because of the price, the Amazon integration, and some of the other points mentioned above. This area was the smelling salts, as it were, that woke me up to reality. Remember how I said “mostly” in the good performance section? When you first turn on the device things are very smooth, surprisingly so in fact. However, as you begin to install apps and put it through its paces, that experience quickly withers. It doesn’t become unusable, but it is noticeably less smooth. My biggest irritation was when exiting apps back to the Home screen: there would be a delay before the icons appeared on the screen. This may sound like a small thing as you read it, but think about how many times you perform that action over the course of using a tablet. Overall it adds up to a noticeable amount of lag that you run into constantly. Also, in many apps there was a surprising amount of choppiness. One of the reasons I wanted something newer was so that games and such would perform better. However, when I went back and compared the Fire HD 8 to my 4-year-old iPad Mini, it was actually performing worse. Hearthstone had run, albeit not perfectly, on my iPad but was almost unusable on the Kindle. Even simpler games like Candy Crush were annoyingly laggy on the Kindle but ran smoothly on the older iPad. Not what you’re looking for in a new-device experience.

Conclusions

So, what does all of this mean? Do I think the Kindle Fire HD 8 is a good tablet? Yes. Do I think it’s the one for me? No. In fact, I’ve gone back to using my iPad Mini, and I’ll likely save up and buy a newer Mini to replace it. Why? Mostly ecosystem, experience, and apps. If you’ve used a tablet that performs well, then using a laggy one feels like going backwards. Also, on iOS I have access to the ecosystem that all my other devices use, as well as a rich app store. If you’re invested in either the Apple or Google ecosystem, you’re going to struggle adjusting to Amazon’s app selection.

That being said, this doesn’t mean the Kindle isn’t a great device for others. I think it would be a fantastic device for someone who (a) is buying a tablet for the first time and doesn’t have any previous expectations or investments in other ecosystems, or (b) simply wants to consume Amazon services. It’s also great if you want something cheap that can take a beating. Full disclosure: we have two of the cheaper Kindle Fire 7 tablets that my kids use. For simple children’s games and the like, they’re just right. I’ll likely save this one for when one of theirs dies and let it be a nice upgrade for them.

Could I make the Kindle Fire work? Yes, but I prefer the iPad. And sometimes preference is all it comes down to.

List the Manager Emails for AD Group Members with PowerShell

Below is a quick script I’ve been working on for a colleague. Its simple purpose is to query an AD group and then display each member’s username and the email address of their manager. Hopefully this will help others accomplish this common task with ease.

(Update 04/11/2018)

I modified the script to include the employee’s full name and also to sort the output by that name. Additionally, the way I was handling the variable members and naming bothered me. I knew it could be better. So I did some more research and cleaned that up utilizing the -ExpandProperty option for select.
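A minimal sketch of what the updated version looks like is below. The group name is a placeholder, and it assumes the RSAT ActiveDirectory module is available and that the group contains user accounts.

```powershell
# Sketch: list each group member's name, username, and manager's email, sorted by name.
Import-Module ActiveDirectory

Get-ADGroupMember -Identity 'MyADGroup' |             # 'MyADGroup' is a placeholder
    Get-ADUser -Properties DisplayName, Manager |
    Select-Object DisplayName, SamAccountName,
        @{ Name       = 'ManagerEmail'
           Expression = {
               if ($_.Manager) {
                   # Manager is stored as a distinguished name; look up that user's email
                   Get-ADUser -Identity $_.Manager -Properties EmailAddress |
                       Select-Object -ExpandProperty EmailAddress
               }
           } } |
    Sort-Object DisplayName
```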

 

Getting Started with PowerShell Desired State Configuration (DSC)

As I’ve mentioned in other posts, Desired State Configuration (DSC) is a powerful technology with a lot of potential. However, due to how new it is and how rapidly it’s evolving, it can be difficult to get started and figure out how to accomplish your specific goals. My intention here is not to give an exhaustive look at the ins and outs of DSC (I’m not qualified to do that), but rather to give you the tools to get started and be successful with it.

Step 1: Get a Baseline

After years of cobbling together information, then having to go back later and relearn how to do it the right way, I’ve learned the value of getting the framework right from the start. If you have a strong framework in mind for how something is built and how it’s intended to be used, then building useful things on top of it is much easier. I had the same experience when learning DSC. There is a lot of information on the internet about it, but much of it is “old” and out of date. Also, everyone has their own opinions about how it should be used. Eventually I came across two video courses on Microsoft Virtual Academy that put everything into perspective. They are taught by Jeffrey Snover, Microsoft Distinguished Engineer and inventor of PowerShell, along with Windows PowerShell MVP Jason Helmick. They are a fantastic starting point for diving into DSC and I can’t recommend them enough.

Getting Started with PowerShell Desired State Configuration (DSC) – Microsoft Virtual Academy

Advanced PowerShell Desired State Configuration (DSC) and Custom Resources – Microsoft Virtual Academy

Step 2: Working With New Resources

Now that you know all about DSC and how it can be used, it’s time to put that knowledge to work. You eagerly download the module you want to use (let’s say xSQLServer, for example) and are ready to have machines install SQL for you. The first inclination is probably to Google it to see how it’s used, which will lead you to GitHub or the PowerShell Gallery. Those are great for getting information about the package and its change tracking, but not much use for actually implementing the module. So here you are with a brand new toy and no manual.

Examples

The first thing to do with a new module is always to check for an examples folder. Your module was probably installed in C:\Program Files\WindowsPowerShell\Modules.
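For example, with the xSQLServer module in the default location (the exact folder layout varies a bit from module to module):

```powershell
# Peek at the module's folder layout (Examples is what we're after)
Get-ChildItem -Path 'C:\Program Files\WindowsPowerShell\Modules\xSQLServer'

# List the example scripts themselves
Get-ChildItem -Path 'C:\Program Files\WindowsPowerShell\Modules\xSQLServer\Examples' -Filter *.ps1 -Recurse
```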

Opening the examples folder within that module will reveal a list of scripts made by the authoring team for various scenarios they see the module being used in. Your mileage may vary depending on the module you’re using and who made it, but generally those produced by Microsoft have useful information. This is where I obtained the example file that my last post, “Installing SQL Server Using Desired State Configuration”, was based on. Again, how much explanation is included within each script is completely up to the discretion of the creator. That’s why it’s important to first watch the videos linked above. Then, whether there is proper documentation or not, you can make sense of it yourself.

(Upon further inspection, the examples are sometimes also available on the GitHub site.)

Interrogating Resources

Even when you have an example file that closely matches your needs it’s likely that you will still want to customize it. Many times, the module you are working with will have a resource you need but not an example of it listed in the file. Or you simply want to know what all you can do with the module. As usual, it’s PowerShell to the rescue.

Using PowerShell we can easily find which resources are available to us in a module using Get-DscResource.
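For example, to list everything the xSQLServer module exposes:

```powershell
# List every DSC resource the module exposes, along with its properties
Get-DscResource -Module xSQLServer
```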

After finding a resource that interests us, we can dig further down into its specifics. In order to get more than cursory information, it’s necessary to expand the properties field.
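For instance, for the xSQLServerLogin resource (any resource name works here):

```powershell
# Show each property's name, type, and whether it's mandatory
Get-DscResource -Name xSQLServerLogin | Select-Object -ExpandProperty Properties

# Or dump a ready-to-edit template of the resource block
Get-DscResource -Name xSQLServerLogin -Syntax
```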

From this information we can tell which fields are available to us, what their data types are, and which ones are required.

It should also be noted that you can discover these from right within the ISE as well. This is the improvised method that I used before discovering the PowerShell cmdlets above. If you type anything within the resource block that it doesn’t recognize, IntelliSense will suggest the proper fields to use when you hover over the angry red line.

And, if all else fails, in the end resources are just PowerShell scripts. You can go to their folder and open them like any other file (e.g., C:\Program Files\WindowsPowerShell\Modules\xSQLServer\DSCResources\MSFT_xSQLServerLogin).

Step 3: Be Brave

Armed with all of this knowledge, there is but one thing left to do: be brave. Start putting some configurations together, make mistakes, then use the lessons learned to make better configurations. This is an exciting technology, and things are moving and changing rapidly. In fact, within days of my last post (and while writing this one) I discovered that xSQLServer had been retired in favor of SqlServerDsc, and I’d had no idea.

So get at it, make your own creations, keep your eyes open daily for changes, and let me know if you have any questions. I look forward to learning with you.

Installing SQL Server Using Desired State Configuration

(Update: I’ve since discovered that SqlServerDsc has replaced xSQLServer.)

One of my growing passions is using PowerShell Desired State Configuration (DSC) to automate all the things. I started out with simple configurations for testing but wanted to dive into more complex\useful situations for my day-to-day DBA life. To be honest, I was intimidated by the idea of doing a SQL installation. Configuring simple parameters or creating a directory are easy enough to wrap my head around, but something as complex as a DBMS installation gave me pause. I’m here to tell you that my worries were unfounded, and that you should have none as well.

The blessing and curse of DSC is that it’s so new. It is without doubt a very powerful tool, but as of yet there isn’t a lot of documentation around the individual resources. Or worse yet, the pace of improvement moves so quickly that information from two years ago is now out of date. I plan on doing a separate post for how to approach DSC given these realities. With this post, however, I wanted to fill one of those documentation gaps. Specifically, how to install and configure an instance of SQL server. I based my file off of an example one provided by Microsoft in the “Examples” folder of the xSQLServer module named “SQLPush_SingleServer.ps1”. Pro tip: always look for example folders in the modules you want to work with. It should be noted that you can address much more complicated scenarios, such as setting up clusters or Availability Groups, but for simplicity this configuration will be creating a single instance on one node.

If you have experience with DSC or simply don’t want to listen to me drone on about the details, the full configuration is at the bottom. For those interested in the play-by-play, or just bored and looking for something to do, I’ll address each piece individually.

The script starts out with compulsory documentation on the script and what it does. Kidding aside, get into the habit of doing small sections like this. Your coworkers (and you years from now when you’ve forgotten what you did) will thank you.

Next, we hard-code a couple of items specific to your individual run of the script. List the computer(s) that you want to deploy to as well as a local path for the configuration file that DSC will create.
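In my script that’s just a couple of variables, roughly like this (the server names and path are placeholders):

```powershell
# Where to push the configuration, and where to write the compiled .mof files locally
$ComputerName = 'SQLNODE01', 'SQLNODE02'   # placeholder target node(s)
$OutputPath   = 'C:\DSC\SQLSA'             # placeholder local output folder
```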

Following that, we will set how the Local Configuration Manager on the target nodes is to behave. We’re specifying that the configuration is being pushed to it, that it should automatically check every so often for compliance to this configuration and auto-correct anything that’s not aligned, that modules on the node can be overwritten, and that it can reboot if needed.
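I won’t reproduce my exact block, but one way to express those LCM settings is a meta-configuration along these lines (reusing the variables from above):

```powershell
[DSCLocalConfigurationManager()]
configuration LCMSettings
{
    Node $ComputerName
    {
        Settings
        {
            RefreshMode          = 'Push'                 # configuration is pushed, not pulled
            ConfigurationMode    = 'ApplyAndAutoCorrect'  # periodically re-check and fix drift
            AllowModuleOverwrite = $true                  # modules on the node can be replaced
            RebootNodeIfNeeded   = $true                  # allow reboots when a resource asks for one
        }
    }
}

# Compile the meta-configuration and apply it to the target nodes
LCMSettings -OutputPath $OutputPath
Set-DscLocalConfigurationManager -Path $OutputPath -Verbose
```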

Following that are the actual configuration details, where all the fun is defined. Mine is named “SQLSA”, but it really doesn’t matter what you name it. This is like defining a function: as long as you call it by that same name later, little else matters. You’ll see at the top of this section there are three “Import-DscResource” lines. These tell the configuration which DSC modules will be needed to perform the actions we’re requesting.
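The skeleton of that section looks something like the sketch below. The three modules shown are the ones this walkthrough assumes (PSDesiredStateConfiguration, xSQLServer, and xNetworking), and the SetupCredential parameter is an assumption of this sketch; adjust both to what your configuration actually uses.

```powershell
configuration SQLSA
{
    param
    (
        [Parameter(Mandatory)]
        [PSCredential]
        $SetupCredential    # account that will run SQL setup (assumption of this sketch)
    )

    # Modules whose DSC resources this configuration uses
    Import-DscResource -ModuleName PSDesiredStateConfiguration
    Import-DscResource -ModuleName xSQLServer
    Import-DscResource -ModuleName xNetworking

    Node $AllNodes.NodeName
    {
        # ... the resources described below go here ...
    }
}
```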

The WindowsFeature item is one of the most handy in DSC. This allows us, as you might guess, to install Windows Features (in this case the .NET Framework).
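Inside the Node block it’s only a few lines; ‘NET-Framework-Core’ is the usual name for the .NET 3.5 feature SQL Server setup wants, but verify it for your OS version:

```powershell
WindowsFeature 'NetFramework'
{
    Name   = 'NET-Framework-Core'   # .NET Framework 3.5, required by SQL Server setup
    Ensure = 'Present'
}
```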

Next I’ve created a firewall rule to make sure our instance’s port will be open (this is defined later under xSQLServerNetwork). It’s worth noting that there is a resource built into xSQLServer that allows you to configure firewall rules for SQL. However, I did not like the behavior of it and found that xFirewall from the module xNetworking provided a lot more flexibility.
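A sketch of such a rule with xFirewall is below; the rule name and port are placeholders, and the port should match whatever you set under xSQLServerNetwork later:

```powershell
xFirewall 'SQLInstancePort'
{
    Name        = 'SQL-Server-Instance-TCP-In'     # placeholder rule name
    DisplayName = 'SQL Server instance (TCP-In)'
    Ensure      = 'Present'
    Enabled     = 'True'
    Direction   = 'Inbound'
    Protocol    = 'TCP'
    LocalPort   = '1433'                           # keep in sync with the instance port below
}
```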

Up next is the actual meat of installing SQL Server. The if($Node.Features) block is something I picked up from the example file. I’d say it’s redundant to check for whether you’re installing SQL when you came here to install SQL, but hey, it works well so I left it.
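Heavily trimmed, the block looks something like this. The property names are written from memory of xSQLServer and may not match your module version exactly, so verify them with Get-DscResource before relying on them:

```powershell
if ($Node.Features)
{
    xSQLServerSetup 'InstallSQLInstance'
    {
        InstanceName        = $Node.InstanceName
        Features            = $Node.Features
        SourcePath          = $Node.SourcePath            # installation media share
        SetupCredential     = $SetupCredential            # passed in as a configuration parameter
        SQLSysAdminAccounts = $Node.SQLSysAdminAccounts
        DependsOn           = '[WindowsFeature]NetFramework'
    }
}
```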

One way I’ve altered this section from the original is to parameterize everything. If you look further down there is a $ConfigurationData section. Having all of our customizable fields there allows us to easily change them for each deployment (dev, test, prod) without having to search through the code. You and your team will know exactly where to go and what to change for each situation.

I’ve also included some examples of basic SQL Server tasks like creating a database, disabling the sa account, disabling a feature like xp_cmdshell, and configuring the network port (referenced earlier). The naming on these items looks odd but makes sense. By adding in the node name we can ensure that they are unique should we deploy to more than one target node. And adding a friendly name to the configuration item, like “sa”, makes it easy to tell DSC which item depends on which. Speaking of which, note that each of the configurations depends on the base installation. That way DSC will not run those if there is nothing to actually configure.
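Hedged sketches of those items follow. Again, the property names come from memory of the xSQLServer resources (they were renamed in SqlServerDsc), so interrogate each resource to confirm them; the database name and port are placeholders:

```powershell
xSQLServerDatabase ($Node.NodeName + 'AppDatabase')
{
    Ensure          = 'Present'
    Name            = 'AppDatabase'                  # placeholder database name
    SQLServer       = $Node.NodeName
    SQLInstanceName = $Node.InstanceName
    DependsOn       = '[xSQLServerSetup]InstallSQLInstance'
}

xSQLServerLogin ($Node.NodeName + 'sa')
{
    Ensure          = 'Present'
    Name            = 'sa'
    Disabled        = $true                          # may require a newer module version
    SQLServer       = $Node.NodeName
    SQLInstanceName = $Node.InstanceName
    DependsOn       = '[xSQLServerSetup]InstallSQLInstance'
}

xSQLServerConfiguration ($Node.NodeName + 'xp_cmdshell')
{
    SQLServer       = $Node.NodeName
    SQLInstanceName = $Node.InstanceName
    OptionName      = 'xp_cmdshell'
    OptionValue     = 0
    DependsOn       = '[xSQLServerSetup]InstallSQLInstance'
}

xSQLServerNetwork ($Node.NodeName + 'tcp')
{
    InstanceName    = $Node.InstanceName
    ProtocolName    = 'tcp'
    IsEnabled       = $true
    TCPPort         = 1433                           # match the xFirewall rule above
    RestartService  = $true
    DependsOn       = '[xSQLServerSetup]InstallSQLInstance'
}
```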

After the configuration definition we have the $ConfigurationData mentioned earlier. It’s a great idea to get in the habit of using sections like this. It will make your transition between various environments much easier.

The next section details what we’d like the instance name to be as well as which features should be installed. It’s very picky about the feature names, and they don’t line up exactly with a standard command-line install, so be careful what you place here. It won’t install anything incorrectly; it will simply cause the configuration not to run and you to lose your mind.
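A trimmed example of the shape I use is below; the node name, media path, features, and accounts are all placeholders:

```powershell
$ConfigurationData = @{
    AllNodes = @(
        @{
            NodeName                    = '*'
            PSDscAllowPlainTextPassword = $true   # fine for a lab; use certificates for real deployments
        }
        @{
            NodeName            = 'SQLNODE01'                   # placeholder target node
            InstanceName        = 'MSSQLSERVER'
            Features            = 'SQLENGINE,FULLTEXT'          # exact feature names matter here
            SourcePath          = '\\fileserver\media\SQL2016'  # placeholder media share
            SQLSysAdminAccounts = @('CONTOSO\DBA Team')         # placeholder group
        }
    )
}
```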

Also in this section, we’re copying over the modules that each node will need in order to perform this configuration. This isn’t necessary when using DSC in pull mode, but that’s a story for a different post.
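Since this is push mode, the modules have to already exist on each target node. A simple way to get them there is just to copy them out before pushing (paths below assume the default module location):

```powershell
# Copy the resource modules this configuration needs out to every target node
foreach ($computer in $ComputerName)
{
    $destination = "\\$computer\c$\Program Files\WindowsPowerShell\Modules"
    Copy-Item -Path 'C:\Program Files\WindowsPowerShell\Modules\xSQLServer'  -Destination $destination -Recurse -Force
    Copy-Item -Path 'C:\Program Files\WindowsPowerShell\Modules\xNetworking' -Destination $destination -Recurse -Force
}
```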

I know you thought it’d never come, but at last it’s time to actually do something with all of this. We call our “SQLSA” configuration, passing in the $ConfigurationData and specifying to place the resulting .mof file in $OutputPath. After that, configuration is started on each node using Start-DscConfiguration and calling the .mof that was just created. Lastly, the node is tested to make sure it’s not out of compliance.
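The tail end of the script then looks roughly like this:

```powershell
# Compile the configuration into one .mof per node
SQLSA -ConfigurationData $ConfigurationData -OutputPath $OutputPath -SetupCredential (Get-Credential)

# Push the configuration to the nodes and wait for it to finish
Start-DscConfiguration -Path $OutputPath -ComputerName $ComputerName -Wait -Force -Verbose

# Confirm each node now reports as compliant
Test-DscConfiguration -ComputerName $ComputerName -Verbose
```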

If all goes well, your output will lack red and eventually will end in a message stating that the configuration tests as “True”.

 

And that’s all there is to it! Not so scary after all. I deployed my first DSC SQL Server while making tea and wondered why I’d been doing it any other way…

 

 

My First PowerShell Module – PSOraenv

Most of my experience with Oracle has been on Linux, but recently I began working with it on Windows as well. It came to my attention very quickly that oraenv, my beloved friend in Oracle administration, is not present on Windows installations. When you have more than one database installed on a single server, or perhaps ASM (which has its own Oracle home), manually swapping back and forth to work with the different pieces gets really old really fast. For this reason, and because I’ve been anxious to learn how to make my own PowerShell modules, I created PSOraenv. Its purpose is to closely mirror the capabilities of oraenv and allow a similar level of simple, yet powerful, command-line flexibility on Windows systems. Let me give you some examples.

By default, the necessary environment variables for Oracle aren’t set when you fire up PowerShell or Command Prompt. (It seems to default to the most recently installed product.) Using my cmdlet Get-OraEnv, we can verify that they start empty.

In order to see what options are available to us, we run my cmdlet Get-OraSID to pull back a list of the database SIDs and associated Oracle homes on the local machine.

Now that we know what SIDs are available to us, my cmdlet Set-OraEnv can be used.

Now, if we run Get-OraEnv again, we can see that the environment variables have indeed been set.

And, just to prove that actually matters, sqlplus can verify which database we are currently working with. (My sandbox system defaults to lkftest2.)
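Putting those steps together, a session looks roughly like this. Treat the -SID parameter name as illustrative and check Get-Help Set-OraEnv for the exact syntax; the SID value is just one from my sandbox.

```powershell
Get-OraEnv                    # ORACLE_SID / ORACLE_HOME start out empty

Get-OraSID                    # list the SIDs and Oracle homes registered on this server

Set-OraEnv -SID 'lkftest2'    # point the session at the database you want (parameter name assumed)

Get-OraEnv                    # the environment variables are now populated

sqlplus / as sysdba           # sqlplus now connects to the selected instance
```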

And there you have it! We have easily viewed which databases are on the local server and quickly swapped to the one that’s needed. It’s just as easy to swap back to the lkfowler2 SID if needed.

This module can be installed from PowerShell Gallery using Install-Module, like the example below.
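Assuming PowerShellGet is available (it ships with PowerShell 5 and later):

```powershell
Install-Module -Name PSOraenv -Scope CurrentUser
```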

(You can also find the PowerShell Gallery page at https://www.powershellgallery.com/packages/PSOraenv/1.0)

If you’re interested to see what’s in the code, that has been placed below. I welcome any comments, feedback, and functionality requests. Hope you enjoy!

 

Windows Server Core Jumpstart

Recently I’ve been looking into the potential that Windows Server Core holds for our environment. Like most eager new Core users, I imagine, I jumped in with grand visions of spinning up a VM quickly and being off to the races administering it from my desktop. The reality wasn’t quite the same, as I ran into a chicken-and-egg situation: how could I set up the machine when I couldn’t yet connect to it? To complicate the issue, I couldn’t find a concise list of exactly what is needed to simply make the machine available so that I could begin to work with it.

With that in mind, I’ve compiled the following information in hopes of saving others the same headache. There’s nothing earth-shattering here, but hopefully it will allow people to get started with Server Core quickly so that they can move on to more important things, like how the server will actually be used.

Let me know if you have any questions or suggestions. Hope it’s a help to you.

  • Ports to request from your firewall team.
    •  TCP
      • 5985, 5986 (WinRM)
      • 445 (SMB) – This is up to you. I wanted to be able to move files to/from the server.
      • 135
  • Local firewall rules to allow remote administration.
    • Enable Remote Management groups
      (Note: If you enable “Remote Service Management” on the host first, then you can do the others via PowerShell remoting. This can be helpful since copy/paste in things like the VMware console doesn’t always work.)
    • Default outbound traffic to allow
    • Enable Ping (optional)

  • Remote management tools
    • Add the remote computer to Server Manager (available on Windows desktop and server versions).
      • Once added, you can easily launch Computer Management and PowerShell for that specific machine by right-clicking it.
    • Connect via PowerShell remoting.

      • Cross-domain PowerShell remoting (e.g., Dev or Test domains)
        • If remoting isn’t enabled on your local machine, enable it.
        • Add machines to the TrustedHosts list, then verify the entry. (Depending on your setup, you might have to substitute IP addresses for the machine names in -Value. See the sketch after this list.)
        • Use a PSSession to connect.
      • IIS management (run on remote machine)
        • Set HKLM\SOFTWARE\Microsoft\WebManagement\Server\EnableRemoteManagement to 1.
          (This can be achieved using the local regedit tool and connecting it to the remote machine.)
        • Restart the WMSVC service.
        • Connect from local IIS Manager for Remote Administration with the local administrator credentials of the remote machine.
      • You can use either sconfig or remote PowerShell commands to allow Remote Desktop (see the sketch after this list). This is especially helpful for quickly getting to sconfig and other commands that don’t operate properly over remote PowerShell.
  • Common configuration tasks
    • The utility “sconfig” can be used for most setup items.
    • For a more speedy and scriptable setup, below are some common configurations via PowerShell.
      • Change date\time
      • Change computer name
      • Add to the domain

         

 

The Final Push – CRM 2016 is Live

After months of ups and downs, plans built and destroyed, and countless hours invested, CRM 2016 is live. (Actually, it’s been live since Easter weekend. But keeping a blog up to date is obviously not one of my talents.) For the last installment in this series I’d like to go over how we went about the upgrade and highlight some successes. Before I get into that, though, there is one essential thing that needs to be understood about this entire project. From the moment we really got started, it was placed in the hands of Jesus Christ. From planning to implementation, and everything in between, it was submitted to Him. I don’t say that to be cheesy or legalistic, but rather to give credit where credit is due.

As I’ve mentioned before, we were attempting to go from our production Dynamics CRM 2011 environment all the way to the latest Dynamics CRM 2016. That environment has a database that exceeds 1.5TB and numerous integration points with systems like our dialer. During planning we reached out to a vendor, who shall remain nameless, for their advice. The overview of their plan looked something like this:

Following this approach, the migration would flow as such:

  • Shut off the PROD 2011 servers.
  • Back up the PROD 2011 organization database.
  • Restore the PROD 2011 database to the temporary CRM 2013 environment.
  • Import the PROD 2011 database to CRM 2013.
  • Back up the upgraded CRM 2013 database.
  • Restore the upgraded CRM 2013 database to the temporary CRM 2015 environment.
  • Import the upgraded CRM 2013 database to CRM 2015.
  • Back up the upgraded CRM 2015 database.
  • Restore the upgraded CRM 2015 database to the temporary CRM 2016 environment.
  • Import the upgraded CRM 2015 database to CRM 2016.
  • Back up the upgraded CRM 2016 database.
  • Restore the upgraded CRM 2016 database to PROD 2016.
  • Import the upgraded CRM 2016 database to PROD 2016.

We built out the “hop” environments as this plan suggested and began to test the theory. Aside from multiple technical challenges, some of which I’ve addressed in other posts, even when it worked the timeline was simply too long. End to end it took 1.5 weeks or more, and that’s not counting any additional steps such as importing our customizations or the work for our data warehouse. Taking down a tier-zero application for one to two weeks was NOT an option for us.

The more I thought about the proposed plan, our past CRM experiences, and the options we had available to us, the more a new plan came into view. I wasn’t sure it would even work, but decided to do a “mock migration”, if you will, and test it out. The new approach looked like this:

The smaller database symbols represent the config database for each temporary environment. The temporary 2016 servers were eliminated altogether since our final (PROD) hop was 2016 anyway. Basically, I just made the 2011 organization database a central upgrade spot on a new SQL instance and pointed all the app servers there.

Following this new plan, the work flow became:

  • Shut off the PROD 2011 servers.
  • Remove the PROD 2011 organization database from our PROD Availability Group.
  • Detach the PROD 2011 organization database and reattach it under the new PROD 2016 instance.
  • Import the PROD 2011 database to CRM 2013.
  • Remove the organization from CRM 2013.
  • Import the upgraded CRM 2013 database to CRM 2015.
  • Remove the organization from CRM 2015.
  • Import the upgraded CRM 2015 database to CRM 2016 (using PROD App servers).
  • Remove the other copy of the CRM 2011 database and sync the upgraded 2016 database to the other Availability Group node.

This had a profound effect on our timeline. Each of our backup and restore operations had cost us around six hours, for a total of around 48 hours. By pointing all of the app nodes at one central SQL server, that cost was completely eliminated. It also allowed us to utilize the beefier PROD hardware. Instead of 1.5 weeks, our tentative database upgrade timeline became 31-36 hours, very doable for a weekend maintenance window. And that was on slower storage that had been provisioned only for the mock migration tests. In the end our actual time for full migration was less than 24 hours. That includes not only the database upgrade portion but also importing customizations, changing our data warehouse structure, reconfiguring connected systems, and troubleshooting issues.

There was truly a fantastic team that worked on this project and I’m honored to have shared in the adventure with them. There is a great deal more work not detailed in these blog posts because I didn’t have direct insight into those portions. I’d like to call out a few of them at a high level though.

  • Our development team spent months updating old code, much of which predated their employment here. They also developed innovative new ways of approaching old problems and bringing everything into compliance with CRM 2016. With the amount of customization we have, this is more than commendable.
  • My co-upgrader, Matt Norris, put in many long hours and took in stride any challenge thrown at him. He was new to CRM when we started, but I now refer to him as “battle-hardened”.
  • My fellow DBA, Bret Unbehagen, single-handedly worked out the difference in table structures between CRM 2011 and 2016, determined how that would affect streaming data to our Oracle data warehouse, and created a process to mitigate those issues for the go-live weekend. I can’t even begin to describe to you how impressive this is because I don’t even fully understand it myself.
  • Our networking and storage teams were indispensable in preparing for and carrying out the upgrade. I asked for terabytes upon terabytes of space for various tests and mock migrations, as well as innumerable firewall requests, and they delivered every time. The compute guys also gave me additional resources on the app server virtual machines, to which I credit much of our surprisingly short timeline.

It’s a blessing and a privilege to have been a part of such a successful effort. There is nothing more satisfying than doing good work to God’s glory.

If you have any questions about how we approached the upgrade, mock migrations, etc please feel free to ask. I’ll be glad to answer anything that doesn’t put my job at risk 😉

Technical Details

App Server Versions at Upgrade Time

  • 2013: 6.1.0001.0132
  • 2015: 7.0.0001.0129
  • 2016: 8.1.0000.0359

SQL Server Version

  • Microsoft SQL Server 2014 SP2

Dynamics CRM Install\Import “The SQL Server {sqlserver_name} is unavailable”

My apologies for the significant gap between my last post on our CRM 2016 Upgrade Adventure and this one. My time has been consumed with preparing for go-live, but I’ve been keeping track of the roadblocks and caveats we encounter as I go so that I can post about them at later dates.

One of the challenges with this deployment that we did not encounter with the last is a significant increase in firewall and context configuration. LU has, to their credit, made great efforts over the last several years to ensure our network is as secure as possible. With an increase in security, however, comes an increase in complexity.

While setting up CRM you might expect to open a port for the SQL instance (e.g., 1433). It might also occur to you that UDP 1434 should be opened for the SQL Browser Service. Now your app server has a clear line open to the SQL instance. Everything should be ready, so you go to create or import your organization, only to encounter “The SQL Server {sqlserver_name} is unavailable”.

You might also encounter a message about not being able to verify that the SQL Agent is running. Being a thorough Sys Admin\DBA you check the SQL services for these and confirm both are up. You also use telnet or another utility to confirm that the ports are indeed open, so what on earth could CRM need in order to reach SQL?

TCP 445… that’s right. Because of the unique setup of CRM it requires TCP 445 to do any kind of setup. What is TCP 445 you ask? “Active Directory service required for Active Directory access and authentication.” (https://technet.microsoft.com/en-us/library/hh699823.aspx). Why an app server would need an AD authentication port opened to the SQL server is anybody’s guess, but it cleared our issue right up. All system checks passed and it happily imported our database.

It should be noted that if you’re using an Availability Group setup, this port will need to be opened to the other servers in the AG as well. I have had the most success when opening it to the AG listener name as well as all nodes.
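If you want to check the ports yourself from the app server, Test-NetConnection (built into Windows 8/Server 2012 and later) does the job; the server name below is a placeholder:

```powershell
# Confirm the CRM app server can actually reach SQL on the required ports
Test-NetConnection -ComputerName 'SQLAGNODE01' -Port 1433   # SQL instance
Test-NetConnection -ComputerName 'SQLAGNODE01' -Port 445    # the port the install\import wizard also needs
```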

Bonus Round

If none of this helps you, here are some other things I’ve found are necessary to appease the install\import wizards.

  • Make sure you’re in the local Administrators group on the app servers as well as every node in the SQL cluster or AG (added explicitly, not through a group).
  • Make sure your account has the sysadmin role on the SQL instance.
  • Specify the SQL server name using the backslash notation, even if the AG name doesn’t contain it. For instance, if your AG is normally accessed as SQLAGINSTANCE,50000 you would use SQLAGINSTANCE\SQLAGINSTANCE,50000 in the wizard. It seems to be hard-coded to only accept it in that manner.

Dynamics CRM Import Fails on “Upgrade Indexes”

As I mentioned in the last post, I’m taking you through our adventure in upgrading the existing on-premise Dynamics CRM 2011 environment to 2016 (and eventually 2016 cloud). Previously I discussed the first show-stopper error we received, “Must declare the scalar variable “@table”.” Following that resolution the import continued past the stage “Metadata xml upgrade: pass 1” but then failed at “Upgrade Indexes”.

Through trace logs obtained with the CRM diagnostic tool, we discovered that the import wizard was marking a number of indexes to be dropped and then recreated. However, as observed through a SQL Profiler deadlock trace, it was trying to drop and add indexes on the same table at the same time. As I mentioned in my previous post, our database is in excess of 1.5TB. One of the largest tables is ActivityPointerBase, and it’s also one on which many index operations were being executed by the import wizard. The result is that some of the index operations would be chosen as the deadlock victim, causing the import wizard to throw an error and exit. Also, if you restarted the import, it would process the entire list again, not taking into account any indexes it had already dropped and recreated.

My coworker, and local wizard, Bret Unbehagen used the trace logs to determine which tables the import wizard was using to store its index operation information. He then created the query below to produce a list of indexes that it was trying to recreate as well as generate a drop statement for each of those.

So, the basic workflow is: 1) let the import wizard proceed until it fails at “Upgrade Indexes”, 2) run the script above against your organization database to list the indexes that it wants to rebuild, 3) use the generated drop statements to preemptively drop those indexes, and 4) restart the import so that it continues where it left off (a feature of 2013+).

In our experience, this allowed the import wizard to continue through the “Upgrade Indexes” section without deadlocking and proceed with the import. Hopefully it can help you achieve success as well. If you have any questions please feel free to comment. Also, if you’d like to see more from Bret his information is listed below.

Bret Unbehagen (Twitter: @dbaunbe; Web: dba.unbe.org)