Data Engineer

Required Skills

SQL
Python
AWS
Data pipeline development
ETL
Data architecture
Cloud platforms
Data integration
Docker
Kubernetes
Apache Airflow
Spark
Hadoop
Data governance
Data security
Problem-solving
Analytical skills
Written communication
Verbal communication
Remote collaboration

Job Description

Job Title: Data Engineer

Job Type: Full-time

Location: Remote

Job Summary:

Join our client's team as a Data Engineer and play a pivotal role in building and optimizing scalable data pipelines and architectures. You will collaborate closely with cross-functional teams to deliver high-impact data solutions that empower data-driven decision-making across the organization.

Key Responsibilities:

  1. Design, construct, install, and maintain robust data pipelines and architectures for large-scale data processing.
  2. Integrate data from various sources, ensuring accuracy, consistency, and reliability.
  3. Collaborate with data scientists, analysts, and business stakeholders to understand requirements and deliver effective data solutions.
  4. Develop, implement, and optimize ETL processes to support business intelligence and analytics needs.
  5. Monitor, troubleshoot, and enhance performance of data systems, ensuring data quality and availability.
  6. Document data workflows, processes, and technical decisions clearly for both technical and non-technical audiences.
  7. Champion best practices in data engineering, including coding standards, version control, and continuous integration.
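The pipeline, integration, and ETL responsibilities above can be sketched in miniature. This is an illustrative example only, not the employer's actual stack: the source records, field names, and `users` table are all assumptions made for the sketch.

```python
import sqlite3

# Hypothetical source data (the "extract" step). In a real pipeline these
# records would come from an API, a file drop, or an upstream database.
raw_records = [
    {"id": 1, "email": " Alice@Example.com "},
    {"id": 2, "email": "bob@example.com"},
    {"id": 2, "email": "bob@example.com"},  # duplicate to be removed
]

def transform(records):
    """Normalize emails and deduplicate on id (accuracy and consistency)."""
    seen, cleaned = set(), []
    for rec in records:
        if rec["id"] in seen:
            continue
        seen.add(rec["id"])
        cleaned.append({"id": rec["id"], "email": rec["email"].strip().lower()})
    return cleaned

def load(records, conn):
    """Load cleaned records into a warehouse table (here: in-memory SQLite)."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS users (id INTEGER PRIMARY KEY, email TEXT)"
    )
    conn.executemany("INSERT INTO users (id, email) VALUES (:id, :email)", records)
    conn.commit()

conn = sqlite3.connect(":memory:")
load(transform(raw_records), conn)
row_count = conn.execute("SELECT COUNT(*) FROM users").fetchone()[0]
```

In production, each step would typically be a separate task in an orchestrator such as Airflow, with monitoring and data-quality checks between stages.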
Required Skills and Qualifications:

  1. Proven experience as a Data Engineer or in a similar role working with complex data systems.
  2. Expertise in building and optimizing data pipelines, architectures, and data sets.
  3. Strong SQL skills and proficiency with at least one programming language (e.g., Python, Java, Scala).
  4. Experience with cloud-based data solutions (AWS, GCP, or Azure).
  5. Demonstrated ability to communicate complex concepts clearly in both written and verbal form.
  6. Strong analytical and problem-solving skills with high attention to detail.
  7. Ability to work independently in a fully remote environment while collaborating effectively with a distributed team.
Preferred Qualifications:

  1. Experience with containerization and orchestration tools (e.g., Docker, Kubernetes, Airflow).
  2. Background in big data technologies such as Spark or Hadoop.
  3. Prior exposure to data governance and security best practices.

Please note that by applying and completing our interview process, you will be added to our talent pool. This means you'll be considered for this role and for any other roles that match your skills; as a micro1 certified candidate, you'll have these potential opportunities sent your way.

Have any questions? See FAQs