PulsePoint is seeking a Senior Data Engineer to join its rapidly growing Data Engineering team, which processes vast amounts of data daily. The role involves designing and maintaining scalable data processing systems while collaborating with a diverse team.
Sr. Data Engineer

PulsePoint's Data Engineering team plays a key role in a technology company that's experiencing exponential growth. Our data pipeline processes over 80 billion impressions a day (more than 20 TB of data, 200 TB uncompressed). This data is used to generate reports, update budgets, and drive our optimization engines, all while running against tight SLAs and providing stats and reports as close to real time as possible.

The most exciting part about working at PulsePoint is the enormous potential for personal and professional growth. We are always seeking new and better tools to help us meet challenges, such as adopting proven open-source technologies to make our data infrastructure more nimble, scalable, and robust. Some of the cutting-edge technologies we have recently implemented are Kafka, Spark Streaming, Presto, Airflow, and Kubernetes.

What You'll Be Doing
• Design, build, and maintain reliable, scalable, enterprise-level distributed transactional data processing systems that scale the existing business and support new business initiatives
• Optimize jobs to utilize Kafka, Hadoop, Presto, Spark, and Kubernetes resources as efficiently as possible
• Monitor and provide transparency into data quality across systems (accuracy, consistency, completeness, etc.)
• Increase the accessibility and effectiveness of data (work with analysts, data scientists, and developers to build and deploy tools and datasets that fit their use cases)
• Collaborate within a small team with diverse technology backgrounds
• Provide mentorship and guidance to junior team members

Team Responsibilities
• Ingest, validate, and process internal and third-party data
• Create, maintain, and monitor data flows in Python, Spark, Hive, SQL, and Presto for consistency, accuracy, and lag time
• Maintain and enhance the framework for jobs (primarily aggregate jobs in Spark and Hive)
• Create consumers for data in Kafka using Spark Streaming for near-real-time aggregation
• Evaluate tools
• Handle backups, retention, high availability, and capacity planning
• Review and approve DDL for databases, Hive Framework jobs, and Spark Streaming jobs to make sure they meet our standards

Technologies We Use
• Python - primary repo language
• Airflow/Luigi - job scheduling
• Docker - packaged container images with all dependencies
• Graphite - monitoring data flows
• Hive - SQL data warehouse layer for data in HDFS
• Kafka - distributed commit log storage
• Kubernetes - distributed cluster resource manager
• Presto/Trino - fast parallel data warehouse and data federation layer
• Spark Streaming - near-real-time aggregation
• SQL Server - reliable OLTP RDBMS
• Apache Iceberg - open table format
• GCP - BigQuery for performance, Looker for dashboards

Requirements
• 6+ years of data engineering experience
• Fluency in Python and SQL
• Strong recent Spark experience
• Experience working in on-prem environments
• Hadoop and Hive experience
• Experience in Scala/Java is a plus (polyglot programmers preferred!)
• Proficiency in Linux
• Strong understanding of RDBMS and query optimization
• Passion for engineering and computer science around data
• Availability during East Coast U.S. hours (9am-6pm EST); you can work fully remotely
• Notice period of 2 months or less
• Knowledge of and exposure to distributed production systems, e.g., Hadoop
• Knowledge of and exposure to cloud migration (AWS/GCP/Azure) is a plus

Location: We can hire FTEs in the U.S., UK, and Netherlands. We can hire long-term contractors (independent or B2B) in most other countries.

Selection Process:
1) Recruiter Screen (30 mins)
2) Hiring Manager Interview (45 mins)
3) CodeSignal Online Assessment (90 mins)
4) Tech Challenge
5) Interview with Sr. Data Engineer (60 mins)
6) Team Interviews (90 mins + 3 x 45 mins) + SVP of Engineering (30 mins)
7) WebMD Sr. Director, DBA (30 mins)

Note that leetcode-style live coding challenges will be involved in the process.
WebMD and its affiliates are an Equal Opportunity/Affirmative Action employer and do not discriminate on the basis of race, ancestry, color, religion, sex, gender, age, marital status, sexual orientation, gender identity, national origin, medical condition, disability, veteran status, or any other basis protected by law.