Hive Financial Systems is seeking a skilled Data Engineer to design and maintain cloud-based data pipelines using Microsoft Fabric and Azure. The role focuses on delivering high-quality data for analytics and operational use cases.
Job Title: Data Engineer
Experience Required: 3–6+ years

Overview
We are seeking a skilled Data Engineer to design, build, and maintain robust cloud-based data pipelines and architectures. The ideal candidate will have hands-on experience working with Microsoft Fabric and the broader Azure data ecosystem. This role is focused on delivering reliable, high-quality data to power analytics, reporting, and operational use cases.

Key Responsibilities

Data Pipeline Development
• Design, develop, and maintain scalable and efficient pipelines in Microsoft Fabric and Azure.
• Implement ETL/ELT processes to integrate data from diverse sources.

Architecture & Integration
• Collaborate with data architects and analysts to deliver solutions aligned with business objectives.
• Leverage Microsoft Fabric and Azure services to build integrated, cloud-native data platforms.

Data Modeling & Warehousing
• Develop data models to support reporting, analytics, and machine learning use cases.
• Optimize data lakehouse/warehouse solutions for performance and cost-efficiency.

Performance & Quality
• Monitor, troubleshoot, and optimize pipelines for reliability, performance, and data quality.
• Apply best practices in data governance, security, and compliance.

Continuous Improvement
• Evaluate new features and tools within Azure/Microsoft Fabric to improve efficiency.
• Contribute to team knowledge-sharing and process improvements.

Required Qualifications

Professional Experience
• 3–6+ years of experience in data engineering or a related field.
• Hands-on experience with Microsoft Fabric or Azure data platforms.

Technical Skills
• Strong SQL and data modeling skills.
• Experience with ETL/ELT pipelines and orchestration (Azure Data Factory, Fabric pipelines, or similar).
• Programming proficiency in Python and/or PySpark.
• Familiarity with Azure Data Lake, Synapse, SQL DB, and Key Vault.
• Exposure to NoSQL databases (e.g., MongoDB Atlas) is a plus.

Soft Skills
• Strong problem-solving and analytical mindset.
• Effective communicator who can collaborate across teams.
• Comfortable working in a fast-paced, cloud-first environment.

Preferred Qualifications
• Bachelor’s degree in Computer Science, Engineering, or a related field.
• Relevant Microsoft Azure certifications (e.g., Azure Data Engineer Associate).
• Familiarity with DevOps practices, CI/CD, or containerization is a plus.
Sentinel is seeking a Data and Infrastructure Analyst to build scalable data pipelines and manage Azure-based data ecosystems. This role requires hands-on experience with cloud technologies and data transformation.
AT&T is seeking a Senior Azure Databricks Infrastructure Engineer to design and manage cloud infrastructure for Databricks development. The role involves creating scalable, secure environments using Terraform and supporting data engineering workflows.
Join Motion Recruitment as a Data Infrastructure Engineer to design and optimize cloud-based data solutions on Azure. Apply your expertise in Databricks and data engineering to modernize how data is managed.
GovCIO is seeking a Back-End Data/Infrastructure Engineer to design and maintain data pipelines and support analytics through Power BI. This fully remote position requires extensive experience in cloud data integration and SQL.
Parametrix is seeking an Applied Data Scientist to lead their Data & Analytics team, focusing on AI and machine learning applications in the AEC and infrastructure sectors. This hybrid role involves collaborating with various stakeholders to deliver innovative, data-driven solutions.