We were involved from the start in the design and build of a large-scale enterprise data warehouse for a major communications provider using AWS Redshift. It was to replace an existing SQL Server data warehouse, and we had a mandate to examine all aspects of the infrastructure and the development lifecycle in order to greatly increase the scalability of the solution.
We decided on a set of basic principles. The first four were straightforward and it would have been surprising if we hadn’t included them. We added two more that were new to some people in a data warehousing context, and we got significant benefit from them.
1. Agile Development
The team were already used to Scrum with two-week sprints.
2. Standard Facts and Dimensions
With a standard approach to building facts and dimensions, we knew it would be easier to maintain consistency, especially with a large number of developers.
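To illustrate what a standard pattern can look like, here is a minimal sketch of a Type 1 dimension load (changed attributes updated in place, new business keys inserted). SQLite is used purely as a runnable stand-in for Redshift, and the table and column names are invented for the example; the Redshift SQL dialect and the real schema would differ.

```python
import sqlite3

# SQLite as a stand-in for Redshift; the pattern, not the dialect, is the point.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE stg_customer (customer_id INTEGER, name TEXT, segment TEXT);
CREATE TABLE dim_customer (customer_key INTEGER PRIMARY KEY AUTOINCREMENT,
                           customer_id INTEGER UNIQUE, name TEXT, segment TEXT);
""")

def load_dim_customer(conn):
    """Type 1 upsert: update changed attributes in place, insert new keys."""
    conn.execute("""
        INSERT INTO dim_customer (customer_id, name, segment)
        SELECT customer_id, name, segment FROM stg_customer WHERE true
        ON CONFLICT(customer_id) DO UPDATE SET
            name = excluded.name,
            segment = excluded.segment
    """)
    conn.commit()

# First load: two new customers arrive via staging.
conn.executemany("INSERT INTO stg_customer VALUES (?, ?, ?)",
                 [(1, "Acme", "SMB"), (2, "Globex", "Enterprise")])
load_dim_customer(conn)

# Second load: customer 1 changes segment; its surrogate key must not change.
conn.execute("DELETE FROM stg_customer")
conn.execute("INSERT INTO stg_customer VALUES (1, 'Acme', 'Enterprise')")
load_dim_customer(conn)

rows = conn.execute(
    "SELECT customer_id, segment FROM dim_customer ORDER BY customer_id").fetchall()
print(rows)  # attributes updated in place, surrogate keys stable
```

With every dimension built from the same template, a reviewer knows exactly where to look for the business key, the surrogate key, and the update rules, which is what makes consistency across many developers achievable.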
3. MPP Best Practices
It wasn’t possible at the time to build a whole team of developers who already had Redshift experience, but we knew that developers with experience of other MPP databases would have an advantage.
4. Cloud Services Best Practices
Redshift was chosen after an in-depth evaluation, during which we determined that we would also be making use of several other AWS services.
5. Push Button Environments and Deployments
To get the level of scalability we needed, we had to eliminate as many manual steps as possible, giving maximum consistency for minimum effort. But we also wanted to go much further than doing the same work more efficiently. To take advantage of the possibilities offered by cloud services, such as temporary environments, we needed mechanisms that ensured environment building and code deployment would never be a bottleneck.
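One way to picture this is that an environment becomes a function of a name: given an environment name, a script produces and runs the same deployment steps every time. The sketch below builds AWS CLI calls for a hypothetical CloudFormation stack; the template file, stack naming scheme, and parameters are all invented for illustration, and the commands are only printed unless the dry-run guard is dropped.

```python
import subprocess

def deploy_commands(env: str, template: str = "warehouse.yaml") -> list[list[str]]:
    """Build the AWS CLI calls for a push-button environment.

    The template file, stack names and parameters here are hypothetical;
    the point is that an environment is a pure function of its name, so
    temporary environments can be created and torn down identically.
    """
    stack = f"dwh-{env}"
    return [
        ["aws", "cloudformation", "deploy",
         "--template-file", template,
         "--stack-name", stack,
         "--parameter-overrides", f"EnvName={env}"],
        # Smoke test: fail the pipeline if the cluster is not reachable.
        ["aws", "redshift", "describe-clusters",
         "--cluster-identifier", f"dwh-{env}-cluster"],
    ]

def deploy(env: str, dry_run: bool = True) -> None:
    for cmd in deploy_commands(env):
        if dry_run:
            print(" ".join(cmd))
        else:
            subprocess.run(cmd, check=True)

deploy("feature-x")  # dry run: prints the commands it would execute
```

Because the same function drives development, test, and production deployments, a temporary environment for a single feature branch costs one command to create and one to delete.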
6. Automated Testing
We needed to be able to see whether the data warehouse was behaving as we intended, and to maintain predictable behaviour when making large changes quickly.
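In a data warehouse, tests of this kind are often reconciliation checks: row counts, key uniqueness, and measure totals compared between staging and the fact tables. A minimal sketch follows, again using SQLite as a runnable stand-in for Redshift, with invented table names; in practice the same assertions would run over a real warehouse connection as part of the build.

```python
import sqlite3

# SQLite stands in for Redshift; table and column names are invented.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE stg_sales (sale_id INTEGER, amount REAL);
CREATE TABLE fact_sales (sale_id INTEGER, amount REAL);
INSERT INTO stg_sales VALUES (1, 10.0), (2, 25.5);
INSERT INTO fact_sales VALUES (1, 10.0), (2, 25.5);
""")

def scalar(sql):
    """Run a query expected to return a single value."""
    return conn.execute(sql).fetchone()[0]

def test_row_counts_reconcile():
    # every staged sale made it into the fact table
    assert scalar("SELECT COUNT(*) FROM fact_sales") == \
           scalar("SELECT COUNT(*) FROM stg_sales")

def test_no_duplicate_keys():
    dupes = scalar("""SELECT COUNT(*) FROM (
        SELECT sale_id FROM fact_sales GROUP BY sale_id HAVING COUNT(*) > 1)""")
    assert dupes == 0

def test_amounts_reconcile():
    assert scalar("SELECT SUM(amount) FROM fact_sales") == \
           scalar("SELECT SUM(amount) FROM stg_sales")

for t in (test_row_counts_reconcile, test_no_duplicate_keys, test_amounts_reconcile):
    t()
print("all checks passed")
```

Run automatically on every change, checks like these turn "is the warehouse still correct?" from a manual investigation into a pass/fail signal.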
We discussed a lot of other considerations, and many of these follow logically from the six main principles above or are enabled by them. Security is a good example of where following AWS best practices is likely to give more robust control than more traditional options.
Using automated testing we were able to significantly refactor fundamental mechanisms with very high confidence that the code would still work as intended. It was also valuable for coordinating the assessment of less fundamental changes across the team. Because a comprehensive set of tests ran automatically, we were nearly always ready to deploy, and when we weren’t, we knew exactly which change had caused which failure and could act quickly and effectively.
With the push button deployments we could deploy several times a day to a running system, with minimal effort and no need to plan for an outage. Best of all was how quickly code could move from development into production, fully audited and tested along the way.
We got a lot of value out of push button environments and deployments and automated testing, and would highly recommend using them in a data warehouse, especially in the context of cloud services.