Pipelines: Process and Reprocess for multiple pipelines

Description

PRDE - Story default text according to the team DoR (Definition of Ready)

01 - PERSON OF CONTACT (PERSON THAT CAN ANSWER QUESTIONS ABOUT THE PROBLEM):

@Robson Thanael Poffo @MARCOS STUMPF @Lais Machado Eing

02 - STORY BRIEFING (AS A < PERSONA >, I [WANT TO] < NEED >, [SO THAT] < GOAL >)

As an App Owner (TOTVS segment), I want to process and reprocess pipelines using parameters, so that I gain agility by processing only a limited set of records.

03 - GOAL (DESCRIBE THE PROPOSED SOLUTION)

  • Review the feature for Dev and Unified tenants (tenant admin):
    • Pause/Play: the goal is to avoid processing a pipeline with errors. Today it is available only on Activity Management (scheduled tasks).
    • Reprocess from the last task: the goal is to reprocess the data for a partial period of time.
    • Reprocess all data (without removing old data): the goal is to fix a data issue caused by a faulty pipeline.
    • Clean and reprocess data: this flow works correctly today; it removes all data and reprocesses the whole pipeline (all data).
  • Today, reprocessing does not reprocess all data; it reprocesses from the last successful task ID. We need to make this clear, since the initial expectation is that all data is reprocessed.
  • For all these processes, consider a way to run them for more than one pipeline.
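The multi-pipeline parameterized processing described above can be sketched as follows. This is a minimal illustration only; the `ProcessRequest` type and `build_requests` function are hypothetical names, not the Carol API:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import List, Optional

@dataclass
class ProcessRequest:
    """One processing request per selected pipeline (illustrative)."""
    pipeline: str
    process_from: Optional[datetime]  # None means "process all data"

def build_requests(pipelines: List[str],
                   period_hours: Optional[int] = None,
                   now: Optional[datetime] = None) -> List[ProcessRequest]:
    """Build one request per selected pipeline.

    period_hours=None reprocesses all data; otherwise only records newer
    than now - period_hours are considered (the "filter by period" case).
    """
    now = now or datetime.utcnow()
    cutoff = now - timedelta(hours=period_hours) if period_hours else None
    # The same cutoff parameters are fanned out to every selected pipeline.
    return [ProcessRequest(p, cutoff) for p in pipelines]
```

For example, selecting two pipelines with an 8-hour period yields two requests sharing the same cutoff, while omitting the period yields full reprocessing.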

04 - PROBLEM (WHAT'S THE CURRENT PROBLEM SCENARIO OR PAIN TO BE RESOLVED?):

05 - WHO CAN USE THIS FEATURE (USER ROLES):

06 - ACTIVITY DIAGRAM (ACTIVITY DIAGRAM LINK AND IMAGE):

07 - STEPS (ACTIONS TO BE PERFORMED LINKING TO SCREENSHOTS):

08 - ALTERNATIVE STEPS:

09 - ASSETS (FIGMA LINKS, RELEVANT DOCUMENTATION LINKS, JSON EXAMPLES, ETC):

https://www.figma.com/file/FiL1iRG6RPjuNCgVzgGqvaZ3/%F0%9F%9F%A3-Carol-3.0?node-id=29721%3A193164

No pipelines selected:

One or more pipelines selected:

Popup to process single pipeline:

Popup to process multiple pipelines:

10 - ACCEPTANCE CRITERIA:

  • The flow should show the most recent pipelines installed on the Dev/Unified tenant through the Carol App.
  • The user can select multiple pipelines and execute a process flow for all selected pipelines. This is the main goal of this issue!
    • The service that processes the data should validate that each pipeline name exists before starting any tasks, returning an error if a nonexistent pipeline is requested.
    • It should create one task for each selected pipeline.
  • Process strategies (available when selecting multiple pipelines): all of these items work today; this AC is to make sure they keep working as expected when reprocessing multiple pipelines at once, except the tenant filter, which will be handled in CAPL-3382.
    • When the “Filter by: period” checkbox is selected, the advanced-options radio buttons must be disabled, and vice versa.
    • Filter by period: last task (handled on this card: )
      • The user decides to reprocess the pipeline considering the last successfully finished task.
      • When multiple pipelines are selected, this flow considers a different date/time for each pipeline.
      • Show a tooltip on “last task” with: “Last task successfully completed via pipeline orchestrator schedule.”
    • Filter by period: 1 hour, 8 hours, 24 hours, personalized (minutes, hours, days):
      • The user can choose among these strategies to define the period of time for reprocessing the pipeline.
    • Filter by period: tenant (only for Unified Tenant) CAPL-3382 Done
      • The user will select one or more tenants from a list of tenant names.
      • The last task parameter (if selected) must be discarded and disabled.
    • Process all data:
      • This flow will reprocess all data, without applying a timestamp filter.
    • Clean and reprocess all data:
      • This flow will remove all existing data and reprocess everything, without applying a timestamp filter.
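The validation and task-creation criteria above can be sketched as one function. This is an illustrative sketch assuming a generic orchestration service; `start_reprocess` and its parameters are hypothetical names, not the actual Carol service API:

```python
from typing import Dict, List, Optional, Set

def start_reprocess(selected: List[str],
                    known_pipelines: Set[str],
                    tenant_filter: Optional[List[str]] = None,
                    use_last_task: bool = False) -> List[Dict]:
    """Validate pipeline names, then create one task per selected pipeline."""
    # AC: validate that every pipeline exists BEFORE starting any task,
    # returning an error for names that do not exist.
    missing = [p for p in selected if p not in known_pipelines]
    if missing:
        raise ValueError(f"Unknown pipeline(s): {', '.join(missing)}")
    # AC: when the tenant filter is used, the "last task" parameter
    # must be discarded.
    if tenant_filter:
        use_last_task = False
    # AC: one task for each selected pipeline.
    return [{"pipeline": p,
             "tenants": tenant_filter,
             "from_last_task": use_last_task}
            for p in selected]
```

The fail-fast check means either every selected pipeline gets a task or none does, which avoids partially started batches when one name is mistyped.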