RIT Solutions, Inc. is seeking a Kafka and Data Lake Engineer in Durham, North Carolina, to design and manage data pipelines and oversee data lake architecture. The role requires expertise in Apache Kafka, big data technologies, and cloud platforms.
Responsibilities
- Design data pipelines: Build robust, scalable, and secure data pipelines to ingest, process, and move data from various sources into the data lake using Kafka.
- Administer Kafka clusters: Deploy, configure, and maintain Kafka clusters and related ecosystem tools, such as Kafka Connect and Schema Registry, ensuring high availability and performance.
- Manage the data lake: Oversee the architecture and governance of the data lake, including data storage (e.g., AWS S3 or ADLS), security, and metadata.
- Develop data processing applications: Create producers and consumers that interact with Kafka topics using languages such as Python, Java, or Scala (see the sketch after this listing).
- Perform stream processing: Use tools such as Kafka Streams, Apache Flink, or ksqlDB for real-time data transformations and analytics.
- Ensure data quality and security: Implement data quality checks, manage data lineage, and enforce security controls such as encryption, access control lists (ACLs), and regulatory compliance (e.g., GDPR).
- Monitor and troubleshoot: Set up monitoring and alerting for Kafka and data lake infrastructure, and respond to incidents to ensure operational reliability.
- Collaborate with teams: Work closely with data scientists, analysts, and other engineering teams to understand data requirements and deliver reliable data solutions.

Essential skills and qualifications
- Experience: Proven experience designing and managing data platforms with Apache Kafka and big data technologies.
- Programming: Strong proficiency in languages such as Python, Java, or Scala.
- Big data technologies: Expertise in big data processing frameworks such as Apache Spark and Apache Flink.
- Cloud platforms: Hands-on experience with cloud environments (AWS, Azure, or GCP) and relevant services such as S3, Glue, or Azure Data Lake Storage.
- Data lake architecture: A solid understanding of data lake design principles, including storage formats (e.g., Delta Lake, Apache Iceberg), data modeling, and governance.
- Databases: Experience with a range of database systems, both SQL and NoSQL.
- Infrastructure management: Familiarity with infrastructure-as-code tools such as Terraform or Ansible and with containerization using Docker and Kubernetes.

Professionals in this field can advance from entry-level data engineering positions to senior roles, and then to Big Data Architect or Solutions Architect positions overseeing large-scale data infrastructure.

Education
- Bachelor's degree in engineering or a related scientific or technical discipline is required.
- Substitution for education: 10 years or 8 years of additional relevant experience may be substituted for the degree.

Relevant certifications
Pursuing certifications can validate your expertise and boost your career.
- For Kafka: Confluent Certified Administrator for Apache Kafka (CCAAK); Confluent Certified Developer for Apache Kafka (CCDAK)
- For data lake and cloud: Databricks Certified Data Engineer; AWS Certified Data Engineer; Microsoft Certified: Azure Data Engineer Associate
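To give a concrete sense of the producer and consumer work listed under Responsibilities, here is a minimal sketch using the confluent-kafka Python client. The broker address, topic name, and consumer group are illustrative assumptions, not details from the posting.

```python
# Minimal Kafka producer/consumer sketch (confluent-kafka client assumed).
# Broker, topic, and group id below are placeholder assumptions.
from confluent_kafka import Producer, Consumer

BROKER = "localhost:9092"   # assumed broker address
TOPIC = "events"            # assumed topic name

def produce_one():
    producer = Producer({"bootstrap.servers": BROKER})
    # Asynchronously enqueue a message; flush() blocks until delivery completes.
    producer.produce(TOPIC, key="user-123", value='{"action": "login"}')
    producer.flush()

def consume_some(max_messages=10):
    consumer = Consumer({
        "bootstrap.servers": BROKER,
        "group.id": "example-group",      # assumed consumer group
        "auto.offset.reset": "earliest",  # read from the start if no committed offset
    })
    consumer.subscribe([TOPIC])
    try:
        received = 0
        while received < max_messages:
            msg = consumer.poll(timeout=1.0)  # wait up to 1s for a record
            if msg is None:
                continue
            if msg.error():
                print(f"Consumer error: {msg.error()}")
                continue
            print(msg.key(), msg.value())
            received += 1
    finally:
        consumer.close()

if __name__ == "__main__":
    produce_one()
    consume_some()
```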
Join EY as a Manager in Financial Services focusing on Data Management and Strategy, where you will lead teams to help clients optimize their data governance and quality. This role offers an opportunity to work with diverse clients in a dynamic environment, driving innovative solutions in the financial services sector.
Vinsys Information Technology Inc is seeking a Data Engineer proficient in Databricks, Python, and SQL to develop data pipelines and cloud-based solutions. The role involves collaboration with product managers and data scientists to ensure high-quality software delivery.
Infosys Limited is seeking an Azure and Snowflake Data Engineer in Boston, MA to enable digital transformation for clients. The role requires strong technical proficiency in Azure, Snowflake, and Python, along with experience in data engineering.
Franklin Fitch is seeking a Data Engineer with expertise in Databricks and Snowflake to modernize enterprise data integration and analytics. This 12-month hybrid contract role is based in Atlanta, Georgia.
Empower Professionals is seeking a Data Engineer & AI Specialist with expertise in AI/ML and data engineering technologies for a hybrid role in Austin, TX. The ideal candidate will have advanced skills in Python, R, and various data warehousing technologies.