Responsibilities:
• Ingest data using tools such as Informatica PowerCenter (PWC).
• Develop and manage data pipelines with Azure Data Factory (ADF).
• Process, transform, and optimize data using Databricks.
• Collaborate with Product Owners and Tech Leads to define and implement data solutions in agile sprints.
• Store and organize data in Azure Storage Accounts.
• Automate workflows and processes to optimize data ingestion and processing (a brief sketch follows this list).
• Document processes and implement best practices throughout the data pipeline.
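For illustration only, the following is a minimal sketch of the kind of automated landing step the responsibilities above describe: uploading a file into an Azure Storage Account container using the azure-storage-blob Python SDK. The environment variable, container name ("raw"), and blob path are hypothetical placeholders, not details of this role's actual environment.

    import os
    from azure.storage.blob import BlobServiceClient

    # Connection string read from the environment; the variable name,
    # container, and blob path below are illustrative placeholders.
    service = BlobServiceClient.from_connection_string(
        os.environ["AZURE_STORAGE_CONNECTION_STRING"]
    )
    container = service.get_container_client("raw")

    # Upload a locally produced Parquet file into the container, mirroring
    # the kind of automated ingestion step an ADF pipeline would perform.
    with open("orders.parquet", "rb") as f:
        container.upload_blob(
            name="sales/orders/orders.parquet", data=f, overwrite=True
        )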
Skills:
Mandatory:
• Knowledge of Azure Data Factory (ADF) for orchestrating and automating data pipelines.
• Experience with Databricks for processing and analyzing large volumes of data.
• Experience with SQL, including writing and running queries over data in a variety of formats.
• Experience with Azure Storage Accounts, including containers, files, and data organization.
• Ability to work in agile teams using methodologies like Scrum.
• Knowledge of data formats such as Parquet and experience with data transfer using SFTP (see the sketch after this list).
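To make these mandatory skills concrete, here is a minimal sketch of a typical Databricks task combining them: reading Parquet data from an Azure Storage Account (ADLS Gen2) and querying it with SQL via PySpark. The storage account, container, path, and column names are hypothetical placeholders.

    from pyspark.sql import SparkSession

    # In Databricks a SparkSession named "spark" is already available;
    # getOrCreate() simply returns it.
    spark = SparkSession.builder.getOrCreate()

    # Read Parquet data from an ABFSS path; account ("mystorageacct"),
    # container ("raw"), and folder are illustrative placeholders.
    orders = spark.read.parquet(
        "abfss://raw@mystorageacct.dfs.core.windows.net/sales/orders/"
    )

    # Expose the DataFrame as a temporary view and aggregate it with SQL.
    orders.createOrReplaceTempView("orders")
    daily_totals = spark.sql("""
        SELECT order_date, SUM(amount) AS total_amount
        FROM orders
        GROUP BY order_date
        ORDER BY order_date
    """)

    daily_totals.show()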
Nice to Have:
• Experience with Informatica PowerCenter (PWC) for data integration and processing; although categorized as nice to have, it will be essential for development.
• Experience ingesting data from SAP R/3 systems via IDocs.
• Knowledge of different data integration and automation tools.
• Experience in data security practices and data management in cloud environments.
• Familiarity with other storage and processing technologies in Azure environments.
• Experience with Jira, Confluence, WinSCP or FileZilla, and Control-M.
Additional Skills:
• Clear communication and collaboration with cross-functional teams.
• Proactive in documenting and improving data workflows.
• Teamwork, commitment, and dedication.
Work Location: Lisbon
60% remote and 40% on-site