RIT Solutions, Inc. is seeking a Kafka and Data Lake Engineer in Durham, North Carolina, to design and manage data pipelines and oversee data lake architecture. The role requires expertise in Apache Kafka, big data technologies, and cloud platforms.
Responsibilities

- Design data pipelines: Build robust, scalable, and secure data pipelines to ingest, process, and move data from various sources into the data lake using Kafka.
- Administer Kafka clusters: Deploy, configure, and maintain Kafka clusters and related ecosystem tools, such as Kafka Connect and Schema Registry, ensuring high availability and performance.
- Manage the data lake: Oversee the architecture and governance of the data lake, including data storage (e.g., in AWS S3 or ADLS), security, and metadata management.
- Develop data processing applications: Create producers and consumers to interact with Kafka topics using programming languages like Python, Java, or Scala (a minimal illustrative sketch appears at the end of this posting).
- Perform stream processing: Use tools like Kafka Streams, Apache Flink, or ksqlDB to perform real-time data transformations and analytics.
- Ensure data quality and security: Implement data quality checks, manage data lineage, and enforce security controls such as encryption, access control lists (ACLs), and regulatory compliance (e.g., GDPR).
- Monitor and troubleshoot: Set up monitoring and alerting for Kafka and data lake infrastructure, and respond to incidents to ensure operational reliability.
- Collaborate with teams: Work closely with data scientists, analysts, and other engineering teams to understand data requirements and deliver reliable data solutions.

Essential skills and qualifications

- Experience: Proven experience designing and managing data platforms with Apache Kafka and big data technologies.
- Programming: Strong proficiency in languages like Python, Java, or Scala.
- Big data technologies: Expertise in big data processing frameworks such as Apache Spark and Apache Flink.
- Cloud platforms: Hands-on experience with cloud environments (AWS, Azure, or GCP) and relevant services like S3, Glue, or Azure Data Lake Storage.
- Data lake architecture: A solid understanding of data lake design principles, including storage formats (e.g., Delta Lake, Apache Iceberg), data modeling, and governance.
- Databases: Experience with a variety of database systems, both SQL and NoSQL.
- Infrastructure management: Familiarity with infrastructure-as-code tools like Terraform or Ansible, and with containerization using Docker and Kubernetes.

Professionals in this field can advance from entry-level data engineering positions to senior roles, and from there to Big Data Architect or Solutions Architect positions overseeing large-scale data infrastructure.

Education

- Bachelor's degree in engineering or a related scientific or technical discipline is required.
- Substitution for education: 10 years or 8 years of additional relevant experience may be substituted for education.

Relevant certifications

Pursuing certifications can validate your expertise and boost your career.

For Kafka:
- Confluent Certified Administrator for Apache Kafka (CCAAK)
- Confluent Certified Developer for Apache Kafka (CCDAK)

For Data Lake and Cloud:
- Databricks Certified Data Engineer
- AWS Certified Data Engineer
- Microsoft Certified: Azure Data Engineer Associate
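To give a concrete sense of the producer/consumer work described under Responsibilities, here is a minimal sketch using the confluent-kafka Python client against a local broker. The broker address, topic name, payload, and consumer group are illustrative assumptions, not details from the posting.

```python
import json

from confluent_kafka import Consumer, Producer

BROKER = "localhost:9092"   # assumed local development broker
TOPIC = "ingest.events"     # hypothetical topic name

# Produce one JSON-encoded record onto the topic.
producer = Producer({"bootstrap.servers": BROKER})
record = {"source": "orders-db", "op": "insert", "id": 42}
producer.produce(TOPIC, key=str(record["id"]),
                 value=json.dumps(record).encode("utf-8"))
producer.flush()  # block until the broker acknowledges delivery

# Consume from the same topic as part of a consumer group.
consumer = Consumer({
    "bootstrap.servers": BROKER,
    "group.id": "datalake-ingest",    # hypothetical consumer group
    "auto.offset.reset": "earliest",  # start from the oldest offset on first run
})
consumer.subscribe([TOPIC])
try:
    msg = consumer.poll(timeout=5.0)  # returns None if nothing arrives in time
    if msg is not None and msg.error() is None:
        print(msg.key(), json.loads(msg.value()))
finally:
    consumer.close()
```

In practice, pipelines like those the posting describes would run this logic behind Kafka Connect or a stream processor rather than hand-rolled scripts, adding delivery callbacks, schema validation (e.g., via Schema Registry), and error handling around the bare calls shown here.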