What You'll Be Working On (Duties include but are not limited to):

Data Engineering Strategy and Architecture:
- Design and implement scalable, reliable, and efficient data pipelines to support clinical, operational, and business needs.
- Develop and maintain architecture standards, reusable frameworks, and best practices across data engineering workflows.
- Build automated systems for data ingestion, transformation, and orchestration, leveraging cloud-native and open-source tools.

Data Infrastructure and Performance Optimization:
- Optimize data storage and processing in data lakes and cloud data warehouses (Azure, Databricks).
- Develop and monitor batch and streaming data processes to ensure data accuracy, consistency, and timeliness.
- Maintain documentation and lineage tracking across datasets and pipelines to support transparency and governance.

Collaboration and Stakeholder Engagement:
- Work cross-functionally with analysts, data scientists, software engineers, and business stakeholders to understand data requirements and deliver fit-for-purpose data solutions.
- Review and refine work completed by other team members, ensuring quality and performance standards are met.
- Provide technical mentorship to junior team members and collaborate with contractors and third-party vendors to extend engineering capacity.

Technology and Tools:
- Use Databricks, dbt, Azure Data Factory, and SQL to architect and deploy robust data engineering solutions.
- Integrate APIs, structured/unstructured data sources, and third-party systems into centralized data platforms.
- Evaluate and implement new technologies to enhance the scalability, observability, and automation of data operations.

Other Responsibilities:
- Continuous Improvement: Proactively suggest improvements to infrastructure, processes, and automation to increase system efficiency, reduce costs, and enhance performance.
Scope of Role:
- Autonomy of Role: Work is performed under limited supervision.
- Direct Reports: None

Physical Requirements:
- This role requires 100% of work to be performed in a remote office environment and requires the ability to use keyboards and other computer equipment.

Travel Requirements:
- This is a remote position with less than 10% travel. Occasional planned travel may be required as part of the role.

What You Bring (Knowledge, Skills, and Abilities):
- Strong expertise in Databricks, SQL, dbt, Python, and cloud data ecosystems such as Azure.
- Experience working with structured and semi-structured data from diverse domains.
- Familiarity with CI/CD pipelines, orchestration tools (e.g., Airflow, Azure Data Factory), and modern software engineering practices.
- Strong analytical and problem-solving skills, with the ability to address complex data challenges and drive toward scalable solutions.

Certifications, Education, and Experience:
- Bachelor's or Master's degree in Computer Science, Information Systems, Engineering, or a related field.
- 5+ years of experience in data engineering, with a proven track record of building cloud-based, production-grade data pipelines.

Benefits (US Full-Time Employees Only):
- Paid Time Off (PTO) and Company Paid Holidays
- 100% employer-paid medical, dental, and vision insurance plan options
- Health Savings Account and Flexible Spending Accounts
- Bi-weekly HSA employer contribution
- Company-paid Short-Term Disability and Long-Term Disability
- 401(k) Retirement Plan, with Company Match
Job Type
Remote role
Skills required
Azure, Python, CI/CD
Location
Remote, US
Date Posted
May 29, 2025
Care Access Research is seeking a Senior Data Engineer to design and maintain data pipelines for clinical and operational needs. This remote role requires expertise in Databricks, SQL, and cloud data ecosystems.