Job Summary
Identify data needs for the people team and people-focused teams across Medline, understand specific requirements for metrics and analysis, and build efficient and scalable data pipelines to enable a science-driven people function.

Job Description

MAJOR RESPONSIBILITIES:
- Develop and maintain data pipelines using Azure Data Factory, Data Lake, and other Azure cloud services.
- Write, optimize, and troubleshoot SQL queries for data extraction, transformation, and reporting.
- Implement ETL processes to collect, transform, and load data into Azure environments.
- Leverage Python for data manipulation, automation, and integration with cloud services.
- Work with Azure Synapse Analytics, Azure SQL Database, and other Azure tools to support analytics and business intelligence initiatives.
- Collaborate with data scientists, analysts, and stakeholders to support data-driven initiatives.
- Serve as a data provider for different parts of the organization.
- Monitor and troubleshoot data pipelines for performance optimization.
- Document processes, workflows, and solutions for future reference.

MINIMUM JOB REQUIREMENTS:
Education: Bachelor’s degree in Computer Science, Information Systems, Data Science, or a related field.
Work Experience: Minimum of 1 year of hands-on experience with Azure Cloud services (Azure Data Factory, Data Lake, Azure SQL Database, Azure Synapse Analytics).
Knowledge / Skills / Abilities:
- Proficiency in Python for data manipulation and automation.
- Experience with Azure Fabric.
- Strong SQL skills with the ability to write and optimize complex queries.
- Understanding of data architecture, ETL processes, and data warehousing principles.
- Strong problem-solving skills and attention to detail.
- Excellent communication and teamwork abilities.

PREFERRED JOB REQUIREMENTS:
Work Experience:
- Familiarity with data visualization tools like Power BI.
- Understanding of cloud data security and governance practices.

Benefits
Medline is committed to offering competitive benefits and a variety of choices to best meet the needs of you and your family. For employees scheduled to work at least 30 hours per week, this includes health and well-being, financial fitness, career development, paid time off and more. Employees scheduled to work less than 30 hours per week can participate in the 401(k) plan, access the Employee Assistance Program (EAP), Employee Resource Groups (ERG) and Medline Service Corps. For a more comprehensive list of our benefits, please click here.

Every day, we’re focused on building a more diverse and inclusive company, one that recognizes, values and respects the differences we all bring to the workplace. From doing what’s right to delivering business results, together, we’re better. Explore our Diversity, Equity and Inclusion page here.

Medline Industries, LP is an equal opportunity employer. Medline evaluates qualified applicants without regard to race, color, religion, gender, national origin, age, sexual orientation, gender identity or expression, protected veteran status, disability/handicap status or any other legally protected characteristic.

We empower people to share their best ideas. We create new opportunities for our customers and ourselves. And we work together to solve today’s toughest healthcare challenges. Come join our team and accelerate your career in healthcare! Introduce yourself to our recruiters and we'll get in touch if there's a role that seems like a good match.
We’re a medical supply manufacturer and distributor with over 34,000 employees, working in 125+ countries. With the scale of one of the country’s largest companies and the agility of a family-led business, Medline is able to invest in its customers for the future and rapidly respond to a dynamically changing market with customized solutions.
Job Type
Full-time role
Skills required
Azure, Synapse, Python, Fabric
Location
Location not specified
Salary
Salary not specified
Date Posted
November 7, 2024
Medline Industries is seeking an Associate Data Engineer to develop and maintain data pipelines using Azure services. The role involves collaborating with teams to support data-driven initiatives and requires proficiency in Python and SQL.