Windows Server Core Jumpstart

Recently I’ve been looking into the potential that Windows Server Core holds for our environment. Like most eager new Core users, I imagine, I jumped in with grand visions of spinning up a VM quickly and being off to the races administering it from my desktop. The reality wasn’t quite the same, as I ran into a chicken-and-egg situation: how do you set up the machine when you can’t yet connect to it? To complicate the issue, I couldn’t find a concise list of exactly what is needed to simply make the machine available so that I could begin to work with it.

With that in mind, I’ve compiled the following information in hopes of saving others the same headache. There’s nothing earth shattering here, but hopefully it will allow people to get started with Server Core quickly so that they can move on to more important things, like how the server will actually be used.

Let me know if you have any questions or suggestions. Hope it’s a help to you.

  • Ports to request from your firewall team.
    • TCP
      • 5985, 5986 (WinRM)
      • 445 (SMB) – This one is up to you; I wanted to be able to move files to/from the server.
      • 135 (RPC endpoint mapper)
  • Local firewall rules to allow remote administration (see the PowerShell sketch below).
    • Enable the Remote Management rule groups.
      (Note: If you enable “Remote Service Management” on the host first, then you can do the others via PowerShell remoting. This can be helpful since copy/paste in things like the VMware console doesn’t always work.)
    • Allow default outbound traffic.
    • Enable ping (optional).
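
To save some console typing, here is a minimal PowerShell sketch of those local firewall changes. The rule group names are the stock Windows ones, but treat the exact names and the ping rule as assumptions to verify on your build.

    # Run in the VM console (or a local session) on the Server Core machine.
    # Enable the built-in remote administration rule groups.
    $groups = "Windows Remote Management", "Remote Service Management",
              "Remote Event Log Management", "Remote Volume Management", "File and Printer Sharing"
    foreach ($group in $groups) { Enable-NetFirewallRule -DisplayGroup $group }

    # Allow outbound traffic by default on all profiles.
    Set-NetFirewallProfile -Profile Domain,Private,Public -DefaultOutboundAction Allow

    # Optional: allow inbound ping (ICMPv4 echo request).
    New-NetFirewallRule -Name "Allow-Ping-In" -DisplayName "Allow Ping (ICMPv4-In)" `
        -Protocol ICMPv4 -IcmpType 8 -Direction Inbound -Action Allow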

  • Remote management tools
    • Add the remote computer to Server Manager (available on Windows desktop and server versions).
      • Once added, you can easily launch Computer Management and PowerShell for that specific machine by right-clicking it.
    • Connect via PowerShell remoting.

      • Cross-domain PowerShell Remoting (i.e., Dev or Test domains)
        • If remoting isn’t enabled on your local machine, enable it.
        • Add machines to the TrustedHosts list. (Depending on your setup, you might have to substitute IP addresses for the machine names in -Value. See the sketch after this list.)

          Verify that the TrustedHosts entry took (see the sketch after this list).
        • Use a PSSession to connect.
      • IIS management (run on remote machine)
        • Set HKLM\SOFTWARE\Microsoft\WebManagement\Server\EnableRemoteManagement to 1.
          (This can be achieved using the local regedit tool and connecting it to the remote machine.)
        • Restart the WMSVC service.
        • Connect from local IIS Manager for Remote Administration with the local administrator credentials of the remote machine.
      • You can either use sconfig or remote PowerShell to allow Remote Desktop; the commands are in the sketch after this list. (Remote Desktop is especially helpful for quickly getting to sconfig and other commands that do not operate properly over remote PowerShell.)
  • Common configuration tasks
    • The utility “sconfig” can be used for most setup items.
    • For a speedier, scriptable setup, some common configurations via PowerShell are included in the sketch after this list.
      • Change date\time
      • Change computer name
      • Add to the domain
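
To tie the items above together, here is a rough PowerShell sketch of the remoting, IIS, Remote Desktop, and common configuration steps referenced in this list. The machine name, domain name, and date are placeholders, and the IIS piece assumes the Web Management Service feature is already installed; adjust for your environment.

    ### On your workstation: trust the new box and connect ###
    Enable-PSRemoting -Force                                      # only if remoting isn't already enabled locally
    Set-Item WSMan:\localhost\Client\TrustedHosts -Value "CORE01" -Force   # or the IP address, depending on your setup
    Get-Item WSMan:\localhost\Client\TrustedHosts                 # verify the entry
    Enter-PSSession -ComputerName "CORE01" -Credential (Get-Credential)

    ### In the remote session: allow remote IIS management ###
    Set-ItemProperty -Path "HKLM:\SOFTWARE\Microsoft\WebManagement\Server" -Name EnableRemoteManagement -Value 1
    Restart-Service WMSVC

    ### In the remote session: allow Remote Desktop ###
    Set-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Control\Terminal Server" -Name fDenyTSConnections -Value 0
    Enable-NetFirewallRule -DisplayGroup "Remote Desktop"

    ### Common configuration tasks (sconfig alternatives) ###
    Set-Date -Date "2016-08-01 08:00"                             # change date/time
    Rename-Computer -NewName "CORE01" -Restart                    # change computer name
    Add-Computer -DomainName "corp.example.com" -Credential (Get-Credential) -Restart   # join the domain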


The Final Push – CRM 2016 is Live

After months of ups and downs, plans built and destroyed, and countless hours invested, CRM 2016 is live. (Actually, it’s been live since Easter weekend. But keeping a blog up to date is obviously not one of my talents.) For the last installment in this series I’d like to go over how we went about the upgrade and highlight some successes. Before I get into that, though, there is one essential thing that needs to be understood about this entire project. From the moment we really got started, it was placed in the hands of Jesus Christ. From planning to implementation, and everything in between, it was submitted to Him. I don’t say that to be cheesy or legalistic, but rather to give credit where credit is due.

As I’ve mentioned before, we were attempting to go from our production Dynamics CRM 2011 environment all the way to the latest Dynamics CRM 2016. That environment has a database that exceeds 1.5TB and numerous integration points with systems like our dialer. During planning we reached out to a vendor, who shall remain nameless, for their advice. The overview of their plan looked something like this:

Following this approach, the migration would flow as such:

  • Shut off the PROD 2011 servers.
  • Back up the PROD 2011 organization database.
  • Restore the PROD 2011 database to the temporary CRM 2013 environment.
  • Import the PROD 2011 database to CRM 2013.
  • Back up the upgraded CRM 2013 database.
  • Restore the upgraded CRM 2013 database to the temporary CRM 2015 environment.
  • Import the upgraded CRM 2013 database to CRM 2015.
  • Back up the upgraded CRM 2015 database.
  • Restore the upgraded CRM 2015 database to the temporary CRM 2016 environment.
  • Import the upgraded CRM 2015 database to CRM 2016.
  • Back up the upgraded CRM 2016 database.
  • Restore the upgraded CRM 2016 database to PROD 2016.
  • Import the upgraded CRM 2016 database to PROD 2016.

We built out the “hop” environments as this plan suggested and began to test the theory. Aside from multiple technical challenges, some of which I’ve addressed in other posts, even when it worked the timeline was simply too long. End to end it took 1.5 weeks or more, and that’s not counting any additional steps such as importing our customizations or the work for our data warehouse. Taking down a tier zero application for one to two weeks was NOT an option for us.

The more I thought over the proposed plan, our past CRM experiences, and the options we had available to us, the more a new plan came into view. I wasn’t sure it would even work, but I decided to do a “mock migration”, if you will, and test it out. The new approach looked like this:

The smaller database symbols represent the config database for each temporary environment. The temporary 2016 servers were eliminated altogether since our final (PROD) hop was 2016 anyway. Basically, I just made the 2011 organization database a central upgrade spot on a new SQL instance and pointed all the app servers there.

Following this new plan, the work flow became:

  • Shut off the PROD 2011 servers.
  • Remove the PROD 2011 organization database from our PROD Availability Group.
  • Detach the PROD 2011 organization database and reattach it under the new PROD 2016 instance.
  • Import the PROD 2011 database to CRM 2013.
  • Remove the organization from CRM 2013.
  • Import the upgraded CRM 2013 database to CRM 2015.
  • Remove the organization from CRM 2015.
  • Import the upgraded CRM 2015 database to CRM 2016 (using PROD App servers).
  • Remove the other copy of the CRM 2011 database and sync the upgraded 2016 database to the other Availability Group node.

This had a profound effect on our timeline. Each of our backup and restore operations had cost us around six hours, for a total of around 48 hours. By pointing all of the app nodes at one central SQL server, that cost was completely eliminated. It also allowed us to utilize the beefier PROD hardware. Instead of 1.5 weeks, our tentative database upgrade timeline became 31-36 hours, very doable for a weekend maintenance window. And that was on slower storage that had been provisioned only for the mock migration tests. In the end our actual time for the full migration was less than 24 hours. That includes not only the database upgrade portion but also importing customizations, changing our data warehouse structure, reconfiguring connected systems, and troubleshooting issues.

There was truly a fantastic team that worked on this project and I’m honored to have shared in the adventure with them. There is a great deal more work not detailed in these blog posts because I didn’t have direct insight into those portions. I’d like to call out a few of them at a high level though.

  • Our development team spent months updating old code, much of which predated their employment here. They also developed innovative new ways of approaching old problems and bringing everything into compliance with CRM 2016. With the amount of customization we have, this is more than commendable.
  • My co-upgrader, Matt Norris, put in many long hours and took any challenge thrown at him in stride. He was new to CRM when we started, but I now refer to him as “battle-hardened”.
  • My fellow DBA, Bret Unbehagen, single-handedly worked out the difference in table structures between CRM 2011 and 2016, determined how that would affect streaming data to our Oracle data warehouse, and created a process to mitigate those issues for the go-live weekend. I can’t even begin to describe to you how impressive this is because I don’t even fully understand it myself.
  • Our networking and storage teams were indispensable in preparing for and carrying out the upgrade. I asked for terabytes upon terabytes of space for various tests and mock migrations, as well as innumerable firewall requests, and they delivered every time. The compute guys also gave me additional resources on the app server virtual machines, to which I credit much of our surprisingly short timeline.

It’s a blessing and a privilege to have been a part of such a successful effort. There is nothing more satisfying than doing good work to God’s glory.

If you have any questions about how we approached the upgrade, mock migrations, etc., please feel free to ask. I’ll be glad to answer anything that doesn’t put my job at risk 😉

Technical Details

App Server Versions at Upgrade Time

  • 2013: 6.1.0001.0132
  • 2015: 7.0.0001.0129
  • 2016: 8.1.0000.0359

SQL Server Version

  • Microsoft SQL Server 2014 SP2

Dynamics CRM Install\Import “The SQL Server {sqlserver_name} is unavailable”

My apologies for the significant gap between my last post on our CRM 2016 Upgrade Adventure and this one. My time has been consumed with preparing for go-live, but I’ve been keeping track of the roadblocks and caveats we encounter as I go so that I can post about them at later dates.

One of the challenges with this deployment that we did not encounter with the last is a significant increase in firewall and context configuration. LU has, to their credit, made great efforts over the last several years to ensure our network is as secure as possible. With an increase in security, however, comes an increase in complexity.

While setting up CRM you might expect to open a port for the SQL instance (i.e., 1433). It might also occur to you that UDP 1434 should be opened for the SQL Browser Service. Now your app server has a clear line open to the SQL instance. Everything should be ready, so you go to create or import your organization, only to encounter “The SQL Server {sqlserver_name} is unavailable”.

You might also encounter a message about not being able to verify that the SQL Agent is running. Being a thorough Sys Admin\DBA you check the SQL services for these and confirm both are up. You also use telnet or another utility to confirm that the ports are indeed open, so what on earth could CRM need in order to reach SQL?

TCP 445… that’s right. Because of the unique setup of CRM it requires TCP 445 to do any kind of setup. What is TCP 445 you ask? “Active Directory service required for Active Directory access and authentication.” (https://technet.microsoft.com/en-us/library/hh699823.aspx). Why an app server would need an AD authentication port opened to the SQL server is anybody’s guess, but it cleared our issue right up. All system checks passed and it happily imported our database.

It should be noted that if you’re using an Availability Group setup, this port will need to be opened to the other servers in the AG as well. I have had the most success when opening it to the AG listener name as well as all nodes.
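
If you want to confirm the path before re-running the wizard, a quick check from the app server like the one below has served me well. The listener and node names here are made up; substitute your own. (UDP 1434 needs a different tool, since Test-NetConnection only checks TCP.)

    # Hypothetical names: CRMSQL-LSN is the AG listener, CRMSQL01/CRMSQL02 are the nodes.
    $targets = "CRMSQL-LSN", "CRMSQL01", "CRMSQL02"
    $ports   = 1433, 445
    foreach ($target in $targets) {
        foreach ($port in $ports) {
            # TcpTestSucceeded should come back True for every combination before running setup.
            Test-NetConnection -ComputerName $target -Port $port |
                Select-Object ComputerName, RemotePort, TcpTestSucceeded
        }
    }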

Bonus Round

If none of this helps you, here are some other things I’ve found are necessary to appease the install\import wizards.

  • Make sure you’re in the local Administrators group on the app servers as well as every node in the SQL cluster or AG (added explicitly, not through a group).
  • Make sure your account has the sysadmin role on the SQL instance.
  • Specify the SQL server name using the backslash notation, even if the AG name doesn’t contain it. For instance, if your AG is normally accessed as SQLAGINSTANCE,50000 you would use SQLAGINSTANCE\SQLAGINSTANCE,50000 in the wizard. It seems to be hard-coded to only accept it in that manner.

Dynamics CRM Import Fails on “Upgrade Indexes”

As I mentioned in the last post, I’m taking you through our adventure in upgrading the existing on-premises Dynamics CRM 2011 environment to 2016 (and eventually 2016 cloud). Previously I discussed the first show-stopper error we received, “Must declare the scalar variable “@table”.” Following that resolution the import continued past the stage “Metadata xml upgrade: pass 1” but then failed at “Upgrade Indexes”.

Through the use of trace logs obtained by using the CRM diagnostic tool, we discovered that the import wizard was marking a number of indexes to be dropped and then recreated. However, as observed through a SQL Profiler deadlock trace, it was trying to drop and add indexes on the same table at the same time. As I mentioned in my previous post, our database is in excess of 1.5TB. One of the largest tables is ActivityPointerBase, and it’s also one on which many index operations were being executed by the import wizard. The result is that some of the index operations would be chosen as the deadlock victim, causing the import wizard to throw an error and exit. Also, if you restarted the import it would process the entire list again, not taking into account any that it had dropped and recreated already.

My coworker, and local wizard, Bret Unbehagen used the trace logs to determine which tables the import wizard was using to store its index operation information. He then created the query below to produce a list of indexes that it was trying to recreate as well as generate a drop statement for each of those.

So, the basic workflow is 1) let the import wizard proceed until it fails at “Upgrade Indexes”, 2) run the script above against your organization database to list the indexes that it wants to rebuild, 3) use the generated drop statements to preemptively drop those indexes, and 4) restart the import so that it continues where it left off (a feature of 2013+).

In our experience, this allowed the import wizard to continue through the “Upgrade Indexes” section without deadlocking and proceed with the import. Hopefully it can help you achieve success as well. If you have any questions please feel free to comment. Also, if you’d like to see more from Bret his information is listed below.

Bret Unbehagen (Twitter: @dbaunbe; Web: dba.unbe.org)

Dynamics CRM Import Error “Must declare the scalar variable “@table””

This post is the first in a new series I’m going to call “CRM 2016 Upgrade Adventure”. Summary: my organization has taken on the ambitious challenge of not only upgrading our existing Dynamics CRM 2011 environment to the 2016 version but of moving it to the cloud service as well. Aside from getting the vanilla components through three versions (2013, 2015, 2016) there are all of the custom integrations that have been tied into CRM over the years that must come along too. That is why it is neither a joke nor hyperbole when I label this as an adventure. We are only in the initial months of this effort and I promise you that plenty of adventure has already been had.

Our first headache… I mean, adventure… was encountered while importing the 2011 database to 2013.  (In order to get to 2016 you have to “hop” your database through 2013, 2015, and 2016 by importing it to each version.) Initially we encountered some messages about incompatible components from the CRM 4.0 days, which our database started out in. That was no surprise. The developers quickly updated our current system and we went to import again assuming the path was clear. What I encountered instead was almost instant failure. Within 20 minutes the import had failed. Knowing this was a process that should take several hours (our database exceeds 1.5TB), I took a look at the logs to see what the issue was. During the stage “Metadata xml upgrade: pass 1” the following was listed:

System.Reflection.TargetInvocationException: Exception has been thrown by the target of an invocation. ---> System.Data.SqlClient.SqlException: Must declare the scalar variable "@table".

I’m sure you can appreciate my confusion since this is a closed-source application written by Microsoft in which I have no option to declare variables. Googling only returned articles about people writing their own applications. Feeling we had no other recourse, we opened up a support ticket with Microsoft. That in itself was quite an adventure that spanned two weeks, but I’ll give you the short version. (Props to my coworker John Dalton for his endless hours on the phone with Microsoft through nights and weekends.) In the end the culprit was a trigger that was on our database, but not just any trigger. This one is named “tr_MScdc_ddl_event” and is created by Microsoft when you enable CDC on a database. After scripting out that trigger to make sure we could recreate it later, then dropping it, the import continued past “Metadata xml upgrade: pass 1” successfully.

TLDR version

Microsoft’s database-level CDC trigger tr_MScdc_ddl_event interferes with the Dynamics CRM import operation. Drop that trigger before the import, then add it back once the import has finished, and you shouldn’t have issues with this error.
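
In practice that looked something like the sketch below; the server and database names are placeholders. Script the trigger’s definition out first so you can recreate it after the import.

    # Save the trigger definition so it can be recreated once the import finishes.
    $query = "SELECT m.definition FROM sys.sql_modules m JOIN sys.triggers t ON t.object_id = m.object_id " +
             "WHERE t.name = 'tr_MScdc_ddl_event' AND t.parent_class_desc = 'DATABASE';"
    $definition = Invoke-Sqlcmd -ServerInstance "CRMSQL" -Database "Org_MSCRM" -Query $query
    $definition.definition | Out-File "C:\Temp\tr_MScdc_ddl_event.sql"

    # Drop the database-level CDC DDL trigger so the import can get past "Metadata xml upgrade: pass 1".
    Invoke-Sqlcmd -ServerInstance "CRMSQL" -Database "Org_MSCRM" -Query "DROP TRIGGER [tr_MScdc_ddl_event] ON DATABASE;"

    # When the import completes, run the saved script to put the trigger back.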

So that’s the end of the adventure right? Everything worked fine after that? Not even close! Stay tuned…

Using PowerShell to Refresh a Test SQL Server Instance from Production

A project I’ve been wanting to work on for a long time is how to automate restores of our Production databases to the Test instance. There are a number of challenges associated with this. First, the restore has to be able to find which backup it needs to use. Secondly, many of our databases do not have their files configured in the same way (for example one may have a single .mdf and log whereas another may have multiple .ndf files). Third, restoring from a different instance causes the SQL Authentication users to be orphaned and therefore connections to the databases to fail. And this is just at the high level. There are many smaller complications along each of those roads. Because of these our restore model has typically been per-request, manually executed, and filled with many small and tedious steps. I wanted to replace it with one process that would restore all of the production databases from the proper backup, configure any “with MOVE” options dynamically, and fix any orphaned users. A secondary goal was also to make it as portable as possible so that I could easily reconfigure it for our other PROD/Test/Dev instances.

The result of this labor of love is copied below. You’ll notice that at the top you can configure the Source and Destination SQL instances to be used, as well as the Data\Log paths for both. This is the section that allows me to reconfigure it for other instances. You can turn these into parameters that are passed in if you want to call it manually. For my purposes it is a scheduled process that always runs against a specific instance. Following that section is an exclusion list, built to avoid system databases as well as any others you want to skip (use cautiously to avoid angering coworkers in the Development or Testing departments).
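
The configuration section at the top looks roughly like this (the names and paths here are examples, not my real ones):

    # Source (production) and destination (test) instances for the refresh.
    $SourceSQLInstance = "PRODSQL01\PROD"
    $DestSQLInstance   = "TESTSQL01\TEST"

    # Data and log paths on the destination, used when building the MOVE clauses.
    $DestDataPath = "T:\SQLData"
    $DestLogPath  = "L:\SQLLogs"

    # Databases that should never be refreshed: system databases plus anything your
    # developers or testers would rather you left alone.
    $ExcludedDatabases = "master", "model", "msdb", "tempdb", "DevSandbox"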

The only function is one called “FixOrphans”. If you’ve guessed that’s where the orphaned users are repaired, then you’re today’s winner! This works by pulling a list of available logins from the destination, creating a list of orphans, and using the Source to find which login each user should be matched to. It will also match “dbo” to a different login if that’s the way the Source database was configured. Of course, this breaks down if the logins are not named the same on both instances. This is the case for some of mine as well. In those cases I have a section at the bottom to take care of one-off scenarios by hard coding them. It isn’t ideal, but it will have to do until I can change some policies in my organization.

A fun side note about the FixOrphans function. It started as a small section of the script, then quickly grew to become its own function. I became so interested in what could be done with it that I side-tracked for a while and wrote a stand-alone script just for repairing orphans. Being a faithful reader, you will remember it being covered in Fixing Orphaned SQL Users via PowerShell. So, the restore project was actually first and fixing orphans spun out of that. I then took that work and rolled it back into the restore script, so the code will be very similar to that first post.

After that function is declared we move on to the real work. A variable is created to store Source backup information so that we know which backup to use. It then loops through each of the objects stored in that variable. If the database is in the exclusion list it notes that and moves on. Otherwise, it sets a baseline for the restoration query and starts building on that. This is the portion that allows me to account for differing file configurations per database on the instance. For each database it will pull Source information about the files and dynamically build the MOVE statements. At the end it packages them all together and adds any remaining options, such as keep_cdc. After the full statement is built the script will set the database to single user, closing any connections. It will then drop the database and execute the restoration. Dropping isn’t entirely necessary, but our Test environment is often short on space. Dropping first allows me to free up room before starting the restore.
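
Here is a trimmed-down sketch of that inner loop for a single database, assuming $dbName holds the current database name and the variables from the configuration sketch above are in place. My real script adds error handling and logging around each step.

    # Find the most recent full backup for this database on the source instance.
    $backupQuery = "SELECT TOP 1 bmf.physical_device_name FROM msdb.dbo.backupset bs " +
                   "JOIN msdb.dbo.backupmediafamily bmf ON bmf.media_set_id = bs.media_set_id " +
                   "WHERE bs.database_name = '$dbName' AND bs.type = 'D' ORDER BY bs.backup_finish_date DESC;"
    $backup = Invoke-Sqlcmd -ServerInstance $SourceSQLInstance -Query $backupQuery

    # Pull the source file layout so the MOVE clauses can be built dynamically (.mdf, .ndf, .ldf).
    $files = Invoke-Sqlcmd -ServerInstance $SourceSQLInstance `
        -Query "SELECT name, type_desc, physical_name FROM sys.master_files WHERE database_id = DB_ID('$dbName');"
    $moves = foreach ($file in $files) {
        $fileName = Split-Path $file.physical_name -Leaf
        $target   = if ($file.type_desc -eq 'LOG') { "$DestLogPath\$fileName" } else { "$DestDataPath\$fileName" }
        "MOVE N'$($file.name)' TO N'$target'"
    }

    # Package the full restore statement, including any remaining options such as KEEP_CDC.
    $restore = "RESTORE DATABASE [$dbName] FROM DISK = N'$($backup.physical_device_name)' WITH " +
               ($moves -join ', ') + ", KEEP_CDC, RECOVERY, STATS = 5"

    # Close connections and drop the old copy to free up space (the restore itself is in the next sketch).
    Invoke-Sqlcmd -ServerInstance $DestSQLInstance `
        -Query "ALTER DATABASE [$dbName] SET SINGLE_USER WITH ROLLBACK IMMEDIATE; DROP DATABASE [$dbName];"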

There are two things to note about this section. The first is that, while documentation will tell you that not specifying a timeout value for Invoke-Sqlcmd means that it’s unlimited, that simply isn’t true. Unless you specify one it will die after thirty seconds. Secondly, once the script successfully kicks off that restore command it will truck happily along its way to the next line, whether your restore finishes instantaneously or not (my wager is that it doesn’t). For that reason I built a Do…While loop to monitor for restore commands to finish and then allow the script to proceed. Otherwise it gets awkward trying to set your database to simple recovery when it doesn’t actually exist yet. The commands to set that recovery option and shrink the log file are also both in the interest of saving space.
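
As a sketch, the execution and wait loop look like this; the query timeout is set explicitly because the 30-second default is nowhere near enough for a large restore.

    # Kick off the restore built in the previous sketch, with an explicit (very long) timeout.
    Invoke-Sqlcmd -ServerInstance $DestSQLInstance -Query $restore -QueryTimeout 65535

    # In my experience the script moves on before the restore is done, so poll until no RESTORE command is active.
    do {
        Start-Sleep -Seconds 60
        $active = Invoke-Sqlcmd -ServerInstance $DestSQLInstance `
            -Query "SELECT COUNT(*) AS RestoreCount FROM sys.dm_exec_requests WHERE command LIKE 'RESTORE%';"
    } while ($active.RestoreCount -gt 0)

    # Now the database actually exists: flip it to SIMPLE recovery and shrink the log to save space.
    Invoke-Sqlcmd -ServerInstance $DestSQLInstance -Query "ALTER DATABASE [$dbName] SET RECOVERY SIMPLE;"
    Invoke-Sqlcmd -ServerInstance $DestSQLInstance -Database $dbName -Query "DBCC SHRINKFILE (2);"   # file_id 2 is typically the log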

Once all of that is finished, it’s just a matter of resetting variables so that the next database has a clean run and calling the aforementioned FixOrphans function to give those SQL Auth users a home. After all of the elements in the array have been processed I write an error string that I’ve been compiling to a file and call it a day. Two files will be created. RefreshLog.txt contains all of the operational information, and Errors.txt contains, you guessed it, errors.

As with my other scripts, much of this will be different in your environment than it is in mine. However, I hope that this will be easily adaptable for your use and, if nothing else, that you can find useful elements that can be incorporated into your own scripts. As always, feel free to send me any questions, and I hope this helps.

P.S. I chose to use Invoke-Sqlcmd so that all versions of SQL, new or old, would be compatible. For newer instances feel free to use SMO.


Using PowerShell to Perform Availability Group Failovers

In the past we’ve explored how to use PowerShell to automate failovers on SQL Failover Clusters for Windows Updates and other scheduled maintenance. But what if you’re using Availability Groups instead of the traditional cluster? Fear not, there is still a PowerShell option for that. Usually I would have our night team use SSMS to fail over instances, but recently I transitioned to having them use the PowerShell method below. There are two primary reasons for this. 1) The system administrator does not need to have SQL Server rights in order to carry out the failover and 2) having pre-written commands helps cut down on human error.

For the purposes of this example we have two nodes (AGNODE1 and AGNODE2), each having their own instances (SQLINSTANCE1 and SQLINSTANCE2) that are part of an Availability Group (AGINSTANCE). This will walk through the process of installing Windows Updates on each of those nodes. We will assume that at the outset AGNODE1 is the primary for AGINSTANCE.

  1. Install updates on AGNODE2 and reboot as necessary.
  2. Log into AGNODE2.
  3. Right-click PowerShell and click “Run as Administrator”.
  4. Make AGNODE2 the primary by running the following command: Switch-SqlAvailabilityGroup -Path SQLSERVER:\Sql\AGNODE2\SQLINSTANCE2\AvailabilityGroups\AGINSTANCE
  5. Confirm that the AG instance is now owned by AGNODE2 using this command: Get-ClusterGroup
  6. Install updates on AGNODE1 and reboot as necessary.
  7. Log into AGNODE1.
  8. Right-click PowerShell and click “Run as Administrator”.
  9. Make AGNODE1 the primary by running the following command: Switch-SqlAvailabilityGroup -Path SQLSERVER:\Sql\AGNODE1\SQLINSTANCE1\AvailabilityGroups\AGINSTANCE
  10. Confirm that the AG instance is now owned by AGNODE1 using this command: Get-ClusterGroup

A couple of key points to keep in mind: 1) you must run the command from the destination server, and 2) all of the confusing syntax is simply specifying the node, instance name, and Availability Group name for that destination server.
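
For convenience, here are the same commands collected into one place (swap in your own node, instance, and AG names):

    # Run from an elevated PowerShell prompt ON THE NODE THAT IS BECOMING PRIMARY.
    Import-Module SqlServer    # or SQLPS on older installs

    # Example: make AGNODE2 the primary.
    Switch-SqlAvailabilityGroup -Path "SQLSERVER:\Sql\AGNODE2\SQLINSTANCE2\AvailabilityGroups\AGINSTANCE"

    # Confirm which node now owns the AG's cluster group.
    Get-ClusterGroup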

I hope this helps, and as always feel free to send me any questions!

Fixing Orphaned SQL Users via PowerShell

In SQL Server, a login is an instance-level object that is used for authentication. It is mapped to a database user, which controls permissions at the database level. These objects (login and user) are tied to one another via a SID. If the login is deleted and then recreated, or if you restore your production database to a test environment, those SIDs will no longer match and orphaned users will result.

Information on how to resolve this situation can be found on MSDN. However, if you need to fix more than one user this can be painful. It also does not work for the “dbo” user, requiring you to take additional steps in order to repair both. In the interest of handling refreshes to our development and test instances in a more efficient way I’ve created the script below. It takes two parameters, one for the source instance and another for the destination. It will cycle through the databases on the destination instance and query the source for login information pertaining to any orphaned users. Some informational messages have been built in to help you identify issues, such as a login not existing on your destination instance.

There are a couple of disclaimers. This script assumes that your logins will have the same name on both instances. If your production instance has an account named “sqlsvc” and the test instance equivalent is named “sqlsvc_test”, then it will not sync the user to that login. For the situation I’m working with there is no reliable standard in account names for me to rely on. If your environment is more standardized then please feel free to build in that additional logic.
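
If you just want the core idea before digging into the full script, here is a stripped-down sketch for a single database. It re-links orphans to same-named logins and leaves the “dbo” handling and the loop across databases to the full version; the instance and database names are placeholders.

    # Minimal sketch: re-link orphaned users in one destination database.
    $DestSQLInstance = "TESTSQL01\TEST"
    $Database        = "MyAppDB"

    # sp_change_users_login 'Report' lists users whose SIDs no longer match a login.
    $orphans = Invoke-Sqlcmd -ServerInstance $DestSQLInstance -Database $Database -Query "EXEC sp_change_users_login 'Report';"

    foreach ($orphan in $orphans) {
        $userName = $orphan.UserName
        # ALTER USER ... WITH LOGIN re-maps the user's SID to the login of the same name.
        Invoke-Sqlcmd -ServerInstance $DestSQLInstance -Database $Database -Query "ALTER USER [$userName] WITH LOGIN = [$userName];"
    }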

I hope this will be of help to those out there fatigued by running sp_change_users_login one user at a time. You call the script as “./FixOrphans.ps1 -SourceSQLInstance YourSourceName -DestSQLInstance YourDestinationName”. If you don’t provide the parameters up front it will prompt you for them.

As always, if you have any questions please let me know.


Using PowerShell to Execute SQL Maintenance

It’s an odd truth that laziness leads to better systems administration. That is, so long as it spurs you to automate and thoroughly document a repetitive or tedious task. For instance, I was recently tasked with reducing the excessive size of some system tables in our Microsoft Dynamics CRM environment. To start with, I accomplished this the way you would any one-off task. I RDP’d to each of the app nodes, disabled the service that interacts with the tables we’re performing maintenance on, RDP’d to my utility box, used SSMS to disable each of the SQL Agent jobs that might interfere (on two different nodes), opened the script (provided by Microsoft), tweaked it to my liking, and executed it. The next morning I did all of this in reverse, starting with cancelling the script. For one evening this isn’t really a big deal. However, we soon realized that in order to get the record count down to where we wanted it that several iterations of this maintenance would have to occur over the course of multiple weekends. Reviewing all the steps I’d just performed, my thought was “ain’t nobody got time for that”.

Confronted with performing multiple GUI-based steps during each of these maintenance windows I did what any good/lazy Sys Admin does, I scripted it. Below you’ll find an example of what I used. I run it from PowerShell ISE, executing whichever block is applicable to what I want to do at the moment. This allowed me to go from starting up the maintenance in fifteen minutes to under one minute. (I know, 14 minutes isn’t a big deal. But when you’re tired and it’s late every minute counts.) As I mentioned before, my particular case is CRM maintenance. So basically I disable services on the app nodes, disable SQL Agent Jobs that might interfere (my database is in an Availability Group, so I disable them on both nodes), start the SQL Agent Job containing the Microsoft script referenced above, and then do it all in reverse the next morning at the end of the maintenance window. I included service status checks at the bottom because I’m paranoid and want to confirm the services are actually stopped before starting the SQL script. Also, I did not script the stopping of the job. I always hope (in vain) that the job will have finished, signaling the end of this particular maintenance need. Since both SSMS and the script run from my utilities box I check it in SSMS every morning and simply alt-tab over to ISE after stopping the job to start everything back up.
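
A boiled-down example of those blocks is below. The server, service, and job names are placeholders for whatever runs in your environment; I execute each chunk from the ISE as needed rather than running the whole thing top to bottom.

    # --- Start of the maintenance window ---
    $appNodes  = "CRMAPP01", "CRMAPP02"
    $sqlNodes  = "CRMSQL01", "CRMSQL02"
    $agentJobs = "CRM Reindex", "CRM Deletion Service"

    # Stop the CRM service that interacts with the tables under maintenance.
    foreach ($node in $appNodes) {
        Get-Service -ComputerName $node -Name "MSCRMAsyncService*" | Stop-Service
    }

    # Disable the SQL Agent jobs that might interfere, on both AG nodes.
    foreach ($sqlNode in $sqlNodes) {
        foreach ($job in $agentJobs) {
            Invoke-Sqlcmd -ServerInstance $sqlNode -Database msdb -Query "EXEC dbo.sp_update_job @job_name = N'$job', @enabled = 0;"
        }
    }

    # Paranoia check: confirm the services really are stopped before starting the SQL work.
    foreach ($node in $appNodes) {
        Get-Service -ComputerName $node -Name "MSCRMAsyncService*" | Select-Object MachineName, Name, Status
    }

    # Kick off the SQL Agent job that wraps the Microsoft-provided cleanup script.
    Invoke-Sqlcmd -ServerInstance $sqlNodes[0] -Database msdb -Query "EXEC dbo.sp_start_job @job_name = N'CRM Maintenance - Cleanup';"

    # --- End of the maintenance window: re-enable the jobs (@enabled = 1) and start the services back up. ---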

It’s unlikely that you’ll have the exact situation as me, but hopefully this can give you some ideas for how to incorporate these methods into your own work. In any case I hope this helps, and feel free to contact me with any questions.


The Faith of Atheism

The extent to which Christians are criticized for their faith always surprises me. In current American culture, and especially in science-based fields like IT, people are viewed as unintelligent, or at least ignorant, if they believe in any kind of religion. It is automatically assumed that if you find faith to be a credible notion then you must be lacking in basic deductive reasoning or logic. The only way to arrive at answers is by science, and to think that science could lead to God is laughable.

Take a moment to examine the foundations of this world view, though. In order to follow it you place a great deal of weight on two pillars: people and science. You must assume in the first place that it is at all possible for humans to fully understand the mysteries of the universe. Secondly, you must assume that science is a capable vehicle for arriving at those answers. I won’t claim to know what the full intellectual potential of humanity is, but it’s safe to say that at this point we do not know everything. I think any reasonable scientist would agree with that. As to the method, I’m as much a fan of science as anyone. In fact, I thoroughly believe that as we uncover more, science will lead us right back to God. However, science can only provide answers based on the answers it already has. Science is constantly disproving its own findings from decades before based on new information that has recently been acquired. It’s just the nature of how it works, and that’s okay. The trouble comes when people think, with a great amount of hubris, that people (who don’t understand everything) can use science (that hasn’t uncovered everything) to make definite declarations about the universe, its origins, and all it holds. Also, there’s the issue of scientists being bought or swayed to produce skewed results in support of a particular idea. They are only human, and it does happen.

So, to bring it back around to faith, people such as myself believe that God created the universe and all that’s in it. Take a moment to stand up, walk outside of the man-made rectangle you’re sitting in (where it’s so easy to feel in control), and literally think outside of the box. Look around at nature with all of its complexity and intricate detail. Take in the hundreds of types of life just within your yard at home or the grounds at your work. Then think about how, to date, not one other life-supporting planet so perfect as this has been found in all the known universe. Think about the vastness of space with all of the planets and stars it contains, most of which we’ve not viewed yet. Then tell me how you’re not living a life of faith by depending on people, who are equally as fallible and weak as you are, to not only understand all of that fully but to also rule out the existence of something they don’t understand in all the areas we haven’t yet observed.

Let me give you another example. Let’s say that based on today’s knowledge and the research of the world’s top minds you come to the conclusion that there is no God (what other reasonable conclusion is there, right?). Going on that information you live a life with no regard for God and die at an old age, having spent your years pursuing the things of this world. It’s possible you achieved fame, wealth, and enjoyed a long list of pleasures. Eighty years after your death science has progressed enough to explore all of the universe, to examine the basic building blocks of life, and finds conclusive evidence of God. You and all those who confidently followed the minds and science of your day will have missed God completely. There is no second chance to seek Him out. By contrast I will have lived a peaceful life of faith, crafting my actions on the advice of a God I believe to be infinitely wise, and then end my days without regret. Each of us will carry the same things from this life past the grave, nothing. But I have the hope of eternity with a loving creator after having spent what is, in retrospect, a very short time on this earth. If things go the other way and I’m the one that’s wrong then I’ve still lived that same peaceful life following wise teaching and, hopefully, doing good to those around me. I’ve lost nothing because we both end in nothing, and I’ve made a much smaller gamble.

In the end the atheist lives their life based on faith just as much as the religious among us. They like to think it is an intellectually superior position based on concrete evidence. However, in truth it is built on imperfect systems that were created by even more imperfect people. I choose to place my faith instead in a mighty God capable of creating all these things, and to view science as a way to find Him and the glories of His creation rather than as a way to dismiss Him. He is revealed through creation, through history, and through His work in the hearts of men (including my own). His name is Jesus Christ, He sacrificed Himself to set you free, and I’d love to tell you about Him sometime.