

Senior Data Engineer

Job ID: 23-02126
Description:
The Data Engineer will help build and maintain the cloud DeltaLake platform leveraging Databricks. Candidates will be expected to contribute to all stages of the data lifecycle including data ingestion, data modeling, data profiling, data quality, data transformation, data movement, and data curation. This job is located in-office at our World Headquarters in Kansas City.

Job Responsibilities may include:

Design, implement, deploy, and support on-premises and cloud-based data infrastructure (systems and data flows) that is resilient to disruptions and failures

Enhance and support corporate SQL/NoSQL databases, data warehouse (DWH) assets, and streaming data solutions

Ensure high uptime for all data services and consider enhanced solutions through scheduled or event-driven design

Bring multi-cloud, platform-agnostic technologies and practices into the system to enhance reliability and support rapid scaling of the business's data needs

Scale up our data infrastructure to meet cross-functional, multi-industry business needs

Develop, maintain, and operate end-to-end data pipelines in production

Provide subject matter expertise and hands-on delivery of data acquisition, curation, and consumption pipelines on Azure, Databricks, AWS, and Confluent

Maintain current knowledge of emerging, state-of-the-art compute and cloud-based solutions and technologies

Build effective relationships with internal stakeholders

Apply familiarity with the industry technology stack for metadata management: Data Governance, Data Quality, MDM, Lineage, Data Catalog, etc.

Apply hands-on experience implementing analytics solutions leveraging Python, Spark SQL, Databricks Lakehouse Architecture, and orchestration/containerization tools such as Kubernetes and Docker

All other duties as assigned

Requirements:
Bachelor's degree in Computer Science, Information Technology, Management Information Systems (MIS), Data Science or related field. Applicable years of experience may be substituted for degree requirement.

Minimum 8 years of experience in software engineering

Experience leading large and complex data projects, preferred

Experience with large-scale data warehousing architecture and data modeling, preferred

Experience with cloud-based architectures such as Azure, AWS, or Google Cloud, preferred

Experience working with big data technologies (e.g., Snowflake, Redshift, Synapse, Postgres, Airflow, Kafka, Spark, DBT), preferred

Experience implementing pub/sub and streaming use cases, preferred

Experience leading design reviews, preferred

Experience influencing a team's technical and business strategy by making insightful contributions to team priorities and approaches, preferred

Working knowledge of relational databases, preferred

Expertise in SQL, Python, and Java, and in other high-level languages such as Scala, C#, or C, preferred

Demonstrated ability to analyze large data sets to identify gaps and inconsistencies in ETL pipelines and to provide solutions for pipeline reliability and data quality, preferred

Experience designing and implementing an infrastructure-as-code / CI/CD development environment, preferred

Proven ability to build, manage, and foster a team-oriented environment

Excellent communication (written and oral) and interpersonal skills

Excellent organizational, multi-tasking, and time-management skills
