You may find the following useful if you need to set up some checks for completeness of processing.
You may find the following useful if you can’t easily see which user stories relate to which functionality, so you can’t tell how a problem with a user story affects delivery, or you can’t easily get a view on when functionality will be ready because you’re not sure which user stories are relevant.
You may find the following useful if you are being distracted from useful work by efforts to confirm exactly where your team and others have got to.
You may find the following useful if you are developing a microservices-based solution and have direct responsibility for multiple microservices. You want to avoid tight coupling between the microservices you’re directly responsible for, but you also want to take advantage of easy communication between teams.
You may find the following useful if you are an architect in an agile team defining your role, or if it is unclear whether what the team delivers conforms to the organisation’s architectural principles, or if you are being slowed down by an overload of communication relating to official approvals for delivery.
You may find the following useful if you or your team are creating over-optimistic plans and then not achieving all of them, and there is resistance to reducing the commitment because the reported achievement would then look too low.
You may find the following useful if you have challenges running effective daily scrum meetings with a larger geographically distributed team. You might be finding that it takes more than 10 minutes, is awkward, shares no new understanding and results in no one doing anything different. Some people may be complaining that you’re not Agile enough and you need a smaller co-located team, and others may have decided that Agile is ineffective.
Following Episode 71, which was about DevOps for Big Data, Episode 72 focused on databases, and it was great to be invited to take part in this one as well.
It was great to be invited back onto the Continuous Discussions podcast for an episode about big data.
When acquiring data for the data warehouse from source systems, it can be useful to make a clear distinction between the time at which an event occurred and the time at which the event was recorded by the source system. In the simplest case, the source system records the event at the time it occurs, and the anomalies described below do not happen. But where there is a delay between the actual time of the event and the time the record of the event is received by the source system, there's a trap that needs to be avoided.
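To make the trap concrete, here is a minimal sketch (the row layout, timestamps, and function names are illustrative, not taken from any real source system): a daily extract keyed on event time silently misses a late-arriving record, while keying on recorded time picks it up.

```python
from datetime import datetime

# Hypothetical source rows: (id, event_time, recorded_time).
# The last event occurred late on Jan 1 but only reached the
# source system on Jan 2 -- a late-arriving record.
rows = [
    (1, datetime(2017, 1, 1, 9, 0),  datetime(2017, 1, 1, 9, 0)),
    (2, datetime(2017, 1, 1, 17, 0), datetime(2017, 1, 1, 17, 5)),
    (3, datetime(2017, 1, 1, 23, 0), datetime(2017, 1, 2, 1, 0)),
]

def extract(rows, start, end, key):
    """Select rows whose chosen timestamp falls in [start, end)."""
    return [r for r in rows if start <= key(r) < end]

jan2 = datetime(2017, 1, 2)
jan3 = datetime(2017, 1, 3)

# The trap: a "Jan 2" extract keyed on event_time finds nothing,
# so row 3 is never picked up by any daily run.
by_event = extract(rows, jan2, jan3, key=lambda r: r[1])

# Keying the extract on recorded_time catches row 3 in the
# Jan 2 run, even though its event happened on Jan 1.
by_recorded = extract(rows, jan2, jan3, key=lambda r: r[2])

print([r[0] for r in by_event])     # []
print([r[0] for r in by_recorded])  # [3]
```

The general point: incremental extracts should be driven by when the source system learned about an event, while reporting can still use the event's own timestamp.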
We are very pleased to announce that Cloud BI is now an AWS Consulting Partner.
Large table rebuilds need to be handled by the build process.
On Tuesday I participated in an online panel on the subject of Continuous Improvement, as part of Continuous Discussions (#c9d9), a series of community panels about Agile, Continuous Delivery and DevOps.
Is there a way to make use of the cost savings of a transient EMR cluster and still have the convenience of a long-running version?
This article about the recent S3 slowdown and recovery notes that AWS originally pursued the wrong root cause. There's always a risk of this happening. We discuss the benefits of the ability to revert changes here.
We found that one particular type of data warehouse ELT logic test provides especially high benefits for very limited effort.
We created a mechanism that we called "The Federator" for making data processed on one Redshift cluster available on other Redshift clusters. This post follows on from the introduction in part 1 and describes how we solved the challenge of dealing with large data volumes.
We created a mechanism that we called "The Federator" for making data processed on one Redshift cluster be available on other Redshift clusters. This post introduces what we did.
How we built a solution that would keep Amazon Redshift in sync with SQL Server