Create an effective way of auditing all changes to the Task status and loading them into a BigQuery table.
Description
Today, we only keep the final status a task received during an update. However, it would be helpful for developers and data engineers to have the history of all the statuses a Task had during its lifecycle, helping them understand scenarios such as SQL processing and stale checks.
The objective of this card is to evaluate ways of doing this and choose the one that best fits our case, considering:
1. Effort of implementation
2. Costs
3. Architecture Simplicity
4. Performance impacts
Also, consider that SRE already has a way of loading a CloudSQL table into a BigQuery table, and that the final destination of the task audit data should be BQ, so developers, data engineers, CS, and the product team can run analytical queries.
We should also consider that not all task types must be audited. For example, MERGE_TASKS does not need this feature.
Proposed solution
To solve this, we propose the following new architecture:
Check the up to date version of the drawing on Miro: https://miro.com/app/board/uXjVNF9OjeM=/?moveToWidget=3458764576181589944&cot=14
Summary
We will use the already existing TASK_CHANGED event, which is published to NATS, and add an extra handler to it. The changes will then be sent to a PubSub topic, where they will be consumed by a BigQuery subscription and finally stored in BigQuery, ready to be consumed for analytics.
We performed a successful POC of the main premise of this architecture.
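The extra handler described above could be sketched as follows. This is a minimal illustration, not the POC code: the event field names and the `NON_AUDITED_TASK_TYPES` list are assumptions (the card only confirms that MERGE_TASKS is excluded), and `publish` stands in for the PubSub client's publish call so the logic can be shown without the client library.

```python
import json

# Hypothetical exclusion list; the card only confirms MERGE_TASKS is excluded.
NON_AUDITED_TASK_TYPES = {"MERGE_TASKS"}

def handle_task_changed(event: dict, publish) -> bool:
    """Extra TASK_CHANGED handler: forwards audited status changes to PubSub.

    `event` is the payload already received from NATS; `publish` stands in
    for the PubSub publisher call. Returns True when the change was forwarded.
    """
    if event.get("task_type") in NON_AUDITED_TASK_TYPES:
        return False  # e.g. MERGE_TASKS: skip auditing entirely
    # Serialize only the fields the audit table needs; the BigQuery
    # subscription writes the message body straight into the table.
    message = json.dumps({
        "task_id": event["task_id"],
        "task_type": event["task_type"],
        "status": event["status"],
        "changed_at": event["changed_at"],
    }).encode("utf-8")
    publish(message)
    return True
```

Because the handler only filters and serializes, swapping the stand-in `publish` for the real PubSub publisher should be the only integration point.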
- Effort of implementation - TODO: we now need to split this architecture into the following implementation tasks
- Costs - We will have additional costs, mainly on PubSub. We estimated a monthly bill of $60.00 in the very worst scenario, in which all changes to all tasks are sent to PubSub.
- Architecture Simplicity - see the drawing linked above
- Performance impacts - minimal: we will add a PubSub publisher to the TASK_CHANGED event handler, which is already processed independently from the tasks, so there will be no direct performance impact on tasks, and the indirect impact will be negligible since it is just an async publishing step. Most of the work will be done by the subscription in PubSub, with zero impact on our infrastructure.
@Gabriel DAmore Marciano , @Geny Isam Hamud Herrera , @Glaucio Scheibel , @Lucas Noetzold , @Rodrigo Bechtold
This issue was planned to be delivered by 2024-01-22. You can check this by consulting the Due Date field on the issue.
Dates already planned for this issue: 2024-01-22, 2024-01-02
If the External Issue Link field is filled, the customer was also informed on JIRA TOTVS.
@Gabriel DAmore Marciano ,
@Pedro Buzzi ,
@Geny Isam Hamud Herrera ,
This issue was planned to be delivered by 2024-01-01. You can check this by consulting the Due Date field on the issue.
Dates already planned for this issue: 2024-01-01
If the External Issue Link field is filled, the customer was also informed on JIRA TOTVS.