At Crystalloids we work in a structured way to get maximum results in the shortest time:
- Gathering information about the existing architecture and high-level goals
- Writing the backlog together with the Product Owner, developers and Commercial Director
- Refining the backlog to get all necessary details right, such as the Definition of Done
- Prioritising the backlog by business importance
- Estimating the work for the selected user stories
- Designing the architecture
- Implementing the solution
- Demonstrating working software in two sprints of two weeks
- Documenting and handing over the code and documentation
- Reviewing customer satisfaction
Instead of loading data from a source into BigQuery and building reports directly on those BigQuery tables, the best practice we implemented is to divide the process into three steps: load, transform and publish.
The first step, load, only takes data from the source and writes it to BigQuery in its raw form.
No transformations are made on the data, so that you are always looking at exactly the same data as was received from the source. It can be useful to write the data not only to BigQuery but also to a file in Google Cloud Storage; for example, when you receive data in JSON format, it can be stored there in its purest form.
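As an illustration, below is a minimal sketch of such a load step as a Python Cloud Function. The source URL, bucket and table names are hypothetical placeholders, and it assumes the source returns a JSON list of records.

```python
import json
from datetime import datetime, timezone

import requests
from google.cloud import bigquery, storage

# Hypothetical names, for illustration only.
SOURCE_URL = "https://api.example.com/orders"
RAW_BUCKET = "my-project-raw"
RAW_TABLE = "my-project.raw.orders"


def load_raw(request):
    """Fetch source data and store it untouched in Cloud Storage and BigQuery."""
    records = requests.get(SOURCE_URL, timeout=60).json()
    run_ts = datetime.now(timezone.utc).isoformat()

    # Keep the original payload as a file in Google Cloud Storage.
    blob = storage.Client().bucket(RAW_BUCKET).blob(f"orders/{run_ts}.json")
    blob.upload_from_string(json.dumps(records), content_type="application/json")

    # Append the same records, unmodified, to the raw BigQuery table.
    bq = bigquery.Client()
    job_config = bigquery.LoadJobConfig(
        write_disposition=bigquery.WriteDisposition.WRITE_APPEND,
        autodetect=True,
    )
    bq.load_table_from_json(records, RAW_TABLE, job_config=job_config).result()
    return "ok"
```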
The second step, transform, takes the raw data and applies the transformations that are needed to make the data usable. Make sure that:
- all data is deduplicated: no transformed table contains two identical rows (see the sketch after this list);
- all data is current: only one version of the data is present, the most recent one;
- all data is stored at the lowest granularity: the raw data is not aggregated, only stored at its lowest level.
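A minimal sketch of such a transform step, assuming a raw orders table with an order_id business key and an updated_at timestamp (both hypothetical column names): a single BigQuery query rebuilds the transformed table with only the most recent version of each row, at the original granularity.

```python
from google.cloud import bigquery

# Hypothetical table names; the key and timestamp columns depend on the source.
DEDUP_SQL = """
CREATE OR REPLACE TABLE `my-project.transform.orders` AS
SELECT * EXCEPT(row_num)
FROM (
  SELECT
    *,
    ROW_NUMBER() OVER (
      PARTITION BY order_id      -- one row per business key ...
      ORDER BY updated_at DESC   -- ... keeping only the most recent version
    ) AS row_num
  FROM `my-project.raw.orders`
)
WHERE row_num = 1
"""


def transform(request):
    """Rebuild the transformed table: deduplicated, latest version, lowest granularity."""
    bigquery.Client().query(DEDUP_SQL).result()
    return "ok"
```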
Once the data is normalised and all entities are stored in tables, you are ready to start gathering the data that you want to use in your reports or for specific analyses. This should happen in a separate publish step, which is also where you can combine multiple entities and/or sources.
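As a hedged example of such a publish step, the sketch below combines two hypothetical transformed entities (customers and orders) into one reporting table; your own publish queries will of course depend on the reports you need.

```python
from google.cloud import bigquery

# Hypothetical example: combine two transformed entities into one reporting table.
PUBLISH_SQL = """
CREATE OR REPLACE TABLE `my-project.publish.orders_per_customer` AS
SELECT
  c.customer_id,
  c.country,
  COUNT(o.order_id)  AS order_count,
  SUM(o.order_value) AS total_order_value
FROM `my-project.transform.customers` AS c
LEFT JOIN `my-project.transform.orders` AS o
  ON o.customer_id = c.customer_id
GROUP BY c.customer_id, c.country
"""


def publish(request):
    """Gather the data needed for reporting into a dedicated publish table."""
    bigquery.Client().query(PUBLISH_SQL).result()
    return "ok"
```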
To orchestrate these steps and make sure that the next step is only executed once the previous one has finished, we used Google Workflows. Within this workflow, each step in the solution is handled by a Cloud Function. In this case the data had to be loaded from the source on a daily basis, so the first step is triggered by Cloud Scheduler, which allows the process to start at the same time every day.
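As a rough sketch of how one step can be exposed to the workflow, assuming the Cloud Functions are written in Python with the functions-framework package (an assumption for illustration, not stated above): a Workflows HTTP call step waits for the function's response, which is what guarantees that the next step only starts once the previous one has finished.

```python
import functions_framework


@functions_framework.http
def run_load_step(request):
    """HTTP entry point for the load step, called from the Workflows definition.

    Cloud Scheduler starts the process once a day; the workflow calls this
    function, waits for the HTTP response, and only moves on to the next step
    (transform, then publish) once this call has returned successfully.
    """
    # ... run the load logic sketched earlier ...
    return ("load step finished", 200)
```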