Job Details

Data Engineer

  2025-09-14    Mondo Staffing    Burbank, CA
Description:

Apply now: Data Engineer, hybrid in Burbank, CA. This is a 4-month contract position starting September 30, 2025, with potential for extension.

Job Title: Data Engineer
Location: Hybrid (3 days onsite in Burbank, CA)
Start Date: September 30, 2025 (or 2 weeks from offer)
Duration: 4 months (Contract, potential extension)
Compensation Range: $50.00 - $55.00/hr W2

Job Description:
We are seeking a Data Engineer to join a product-oriented delivery team focused on building scalable, governed, and reusable data pipelines. The role is part of a collaborative pod environment where engineers, product owners, architects, and analysts work together to deliver integrated, AI-ready data solutions. The Data Engineer will play a key role in implementing ETL/ELT pipelines, enabling data accessibility across applications, and ensuring compliance with governance and security standards.

Day-to-Day Responsibilities:

  • Build & Maintain Pipelines: Develop ETL/ELT jobs and streaming pipelines using AWS services (Glue, Lambda, Kinesis, Step Functions). Write efficient SQL and Python scripts for ingestion, transformation, and enrichment. Monitor pipeline health, troubleshoot issues, and ensure SLA compliance.
  • Support Data Architecture & Models: Implement physical schemas aligned with canonical and semantic standards. Collaborate with application pods to deliver product-specific pipelines.
  • Ensure Data Quality & Governance: Apply validation rules, implement monitoring, and surface data quality issues. Tag, document, and register new datasets in the enterprise data catalog. Follow platform security and compliance practices (Lake Formation, IAM).
  • Collaborate in Agile Pods: Participate in sprint ceremonies, backlog refinement, and design reviews. Work closely with developers, analysts, and data scientists to clarify requirements and unblock dependencies. Promote reuse of pipelines and shared services across pods.

Requirements:

Must-Haves:

  • 3-5 years of experience as a Data Engineer or in a related role.
  • Experience with SQL, Python, and AWS data services (Glue, Lambda, Kinesis, S3).
  • Familiarity with orchestration tools such as Airflow or Step Functions, and CI/CD workflows.
  • Problem-solving and debugging skills for pipeline operations.

Nice-to-Haves:

  • Experience optimizing pipelines for both batch and streaming use cases.
  • Knowledge of data governance practices, including lineage, validation, and cataloging.
  • Exposure to modern data platforms such as Snowflake, Databricks, Redshift, or Informatica.
  • Strong collaboration and mentoring skills; ability to influence across pods and domains.

Soft Skills:

  • Collaborative mindset and ability to thrive in agile, cross-functional teams.
  • Strong communication skills to work with both technical and non-technical stakeholders.