Role: Azure Databricks Data Engineer
Location: Chicago, IL (Onsite)

Job Description:

Design/Development: You will design and support the business's database and table schemas for new and existing data sources in the Lakehouse, and create and support the ETL processes that move data into the Lakehouse. Through your work you will ensure that:
- The data platform scales to large volumes of data ingestion and processing without service degradation.
- Processes are built for monitoring and optimizing performance.
- Chaos Engineering practices and measures are implemented so the end-to-end infrastructure functions as expected even when individual components fail.

Collaboration: You will work closely with Product Owners, application engineers, and other data consumers across the business to gather requirements and deliver high-quality data for business cases. You will partner with other disciplines, departments, and teams to arrive at simple, functional, and elegant solutions that balance data needs across the business.

Analytics: You will quickly and thoroughly analyze business requirements and translate the results into sound technical data designs. You will document the data solutions and develop and maintain technical specification documentation for all reports and processes.
Skills you MUST have:
- 6+ years of professional data development experience
- 3+ years developing with Azure Databricks or Hadoop/HDFS
- 3+ years of experience with PySpark/Spark
- 3+ years of experience with SQL
- 3+ years of experience developing with Python
- Full understanding of ETL and Data Warehousing concepts
- Data modeling and query optimization skills, including implementation experience with Data Vault, Star Schema, and Medallion architecture
- Experience with CI/CD
- Experience with version control software
- Strong understanding of Agile principles (Scrum)
- Experience with Azure
- Experience with Databricks Delta Tables, Delta Lake, and Delta Live Tables

Bonus points for experience in the following:
- Relational data modeling
- Python library development
- Structured Streaming (Spark or otherwise)
- Kafka and/or Azure Event Hubs
- GitHub SaaS / GitHub Actions
- Snowflake
- Exposure to BI tooling (Tableau, Power BI, Cognos, etc.)

Mandatory skills: Azure Databricks, PySpark, Azure Data Factory
Job Type
Full-time role
Skills required
Azure, Python, CI/CD, Agile, GitHub
Location
Chicago, Illinois
Salary
Not specified
Date Posted
November 26, 2024
Lorven Technologies is seeking an Azure Databricks Data Engineer to design and support database schemas and ETL processes for a scalable data platform in Chicago, IL. The role involves collaboration with various teams to deliver high-quality data solutions and requires extensive experience in data development and Azure technologies.