
Data Engineer

Atlanta Hawks

Full-time • In Office • With Experience

Atlanta, GA


Who are we:
A professional basketball team and state-of-the-art arena/entertainment venue that specializes in creating memorable experiences for each guest we interact with. Some of our favorite things are live sports, concerts, comedy shows, family shows, and most any other world-class event you can think of, and we’re looking for someone who shares the same interests. We live for the fast-paced world of sports & live entertainment, and as such, we work hard, run fast, execute flawlessly, and party it up when it all comes together. Lastly, we strive to deliver wonderful experiences that create lasting memories, and we prefer to surround ourselves with those who are the best at what they do.

Who are you:
An enthusiastic lover of sports, live entertainment, and people. You have true passion for engaging in meaningful interactions and creating memorable experiences for all guests. You strive to be helpful, engaging, and knowledgeable of all things Atlanta Hawks and State Farm Arena. You enjoy being a part of an exciting and dynamic group, and you’re committed to continuously enhancing the productivity and effectiveness of your team. Lastly, you enjoy working hard and celebrating hard, and you’d be shocked if guests weren’t positively impacted by their interactions with you.

Job Summary: The Data Engineer is responsible for the end-to-end data lifecycle—from orchestrating high-frequency ingestion pipelines to ensuring reliable delivery through the activation layer. This role focuses on ingesting, transforming, and delivering reliable, high-quality data from a variety of internal and external sources into our cloud data ecosystem.

Working closely with Analytics Engineers, BI Engineers, and business stakeholders, the Data Engineer ensures that data is accessible, timely, and trustworthy—enabling analytics, reporting, marketing activation, and operational decision-making across the company.

In this position, you will play a key role in developing and optimizing the data ingestion and processing layers of our platform. You will be responsible for building robust ETL/ELT pipelines, managing data workflows, and ensuring efficient movement and storage of data across systems. This includes integrating with APIs, streaming platforms, and third-party systems to support both batch and real-time data use cases.

Key Responsibilities:

Data Pipeline Development & Orchestration

  • Design, build, and maintain scalable and reliable data pipelines to ingest data from various sources, including APIs, databases, and SaaS platforms.
  • Develop and optimize ETL/ELT workflows to efficiently process large volumes of data.
  • Implement orchestration frameworks and Databricks-native workflows to manage dependencies and scheduling.
  • Ensure pipelines are fault-tolerant, observable, and recoverable with appropriate logging and alerting.
  • Continuously improve pipeline performance, scalability, and cost efficiency.
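The incremental batch pattern behind these responsibilities can be sketched in miniature. This is a simplified illustration, not the team's actual stack: the source rows, watermark column, and in-memory "warehouse" below are hypothetical stand-ins for real APIs and warehouse tables.

```python
def extract(source_rows, watermark):
    """Pull only rows newer than the last successful load (incremental batch)."""
    return [r for r in source_rows if r["updated_at"] > watermark]

def transform(rows):
    """Light cleanup before landing: normalize keys, drop incomplete records."""
    return [
        {"id": r["id"], "email": r["email"].strip().lower(), "updated_at": r["updated_at"]}
        for r in rows
        if r.get("email")
    ]

def load(sink, rows):
    """Idempotent upsert keyed on id, so reruns after a failure are safe."""
    for r in rows:
        sink[r["id"]] = r
    return len(rows)

# Hypothetical source data and warehouse "table".
source = [
    {"id": 1, "email": " Fan@Example.com ", "updated_at": "2024-05-01"},
    {"id": 2, "email": None, "updated_at": "2024-05-02"},
    {"id": 3, "email": "guest@example.com", "updated_at": "2024-04-01"},
]
warehouse = {}
loaded = load(warehouse, transform(extract(source, watermark="2024-04-15")))
```

In production the same shape would run under an orchestrator (e.g. Databricks Workflows) with the watermark persisted between runs; the upsert-by-key load is what makes retries after a failure safe.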

Data Platform & Infrastructure

  • Design and maintain core components of the data platform, including data lakes, warehouses, and storage systems.
  • Optimize data storage formats and partitioning strategies for performance and cost.
  • Manage integrations with Databricks and other cloud data platform services.
  • Support both batch and streaming architectures where applicable.
  • Collaborate on infrastructure-as-code and environment management practices.

Data Quality, Reliability & Monitoring

  • Implement data validation and monitoring at ingestion and pipeline stages.
  • Ensure data completeness, freshness, and accuracy through automated checks and alerts.
  • Troubleshoot pipeline failures and data discrepancies, driving root cause resolution.
  • Establish SLAs/SLIs for data availability and pipeline performance.
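Two of the automated checks described above, freshness and completeness, reduce to small testable functions. This is a minimal sketch with hypothetical field names, not a real monitoring framework:

```python
from datetime import date

def check_freshness(rows, max_age_days, today):
    """Pass if the newest row falls inside the agreed SLA window."""
    newest = max(date.fromisoformat(r["updated_at"]) for r in rows)
    return (today - newest).days <= max_age_days

def check_completeness(rows, required_fields):
    """Return the ids of rows missing any required field."""
    return [r["id"] for r in rows
            if any(r.get(f) in (None, "") for f in required_fields)]

rows = [
    {"id": 1, "email": "a@example.com", "updated_at": "2024-05-01"},
    {"id": 2, "email": "", "updated_at": "2024-05-02"},
]
fresh = check_freshness(rows, max_age_days=2, today=date(2024, 5, 3))
missing = check_completeness(rows, required_fields=["email"])
```

A failing check would feed the alerting described above; the SLA threshold (`max_age_days`) is exactly the kind of SLI this role would define per pipeline.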

Data Integration & Enablement

  • Partner with Analytics Engineers to ensure clean handoffs between raw and curated data layers.
  • Enable downstream consumers by delivering well-structured, reliable datasets.
  • Support integrations with downstream tools such as BI platforms, CDPs, and operational systems.
  • Contribute to improving data accessibility and usability across the organization.

Documentation, Governance & Best Practices

  • Document data pipelines, data sources, and system architecture.
  • Maintain clear data lineage and metadata practices.
  • Contribute to standards around naming conventions, code quality, and deployment workflows.
  • Participate in code reviews and CI/CD processes.

Requirements:

  • Strong proficiency in SQL and at least one programming language (Python preferred).
  • Deep understanding of Apache Spark architecture and hands-on experience with PySpark for large-scale data processing and performance optimization.
  • Deep understanding of dimensional data modeling concepts (Kimball) and practical experience building star schemas, including the definition of facts, dimensions, and slowly changing dimensions (SCDs).
  • Experience building and maintaining data pipelines (ETL/ELT).
  • Hands-on experience with Databricks and Databricks-native orchestration/workflow development.
  • Experience with Fivetran for data ingestion and source system connectivity.
  • Experience with dbt, including transformation development, testing, and documentation.
  • Experience with modern cloud data platforms and data engineering best practices.
  • Familiarity with APIs, data ingestion patterns, and data integration techniques.
  • Understanding of distributed data processing and storage concepts.
  • Experience with Git-based workflows (e.g., GitHub Actions) and CI/CD practices.
  • Strong problem-solving skills and ability to debug complex data issues.
  • Experience implementing data governance practices, including data quality frameworks, data lineage, metadata management, access controls, and compliance with organizational and regulatory standards.
  • Proven ability to collaborate cross-functionally with Analytics Engineers, BI teams, Data Scientists, and business stakeholders to translate data requirements into scalable technical solutions.
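The slowly changing dimension (SCD) requirement above has a compact, testable core. The sketch below applies a Type 2 change to an in-memory dimension; the column names (`natural_key`, `valid_from`, `valid_to`, `is_current`) are conventional Kimball bookkeeping columns, not a reference to any specific schema here:

```python
def apply_scd2(dimension, change, effective_date):
    """Type 2 SCD: close out the current row, then append a new current row.

    `dimension` is a list of dicts holding a natural key, attribute columns,
    and the bookkeeping columns `valid_from`, `valid_to`, `is_current`.
    """
    for row in dimension:
        if row["natural_key"] == change["natural_key"] and row["is_current"]:
            row["valid_to"] = effective_date   # expire the old version
            row["is_current"] = False
    dimension.append({
        **change,
        "valid_from": effective_date,
        "valid_to": None,
        "is_current": True,
    })
    return dimension

dim_customer = [{
    "natural_key": "C1", "city": "Atlanta",
    "valid_from": "2023-01-01", "valid_to": None, "is_current": True,
}]
apply_scd2(dim_customer, {"natural_key": "C1", "city": "Decatur"}, "2024-05-01")
```

On a warehouse this same logic is typically expressed as a `MERGE` (e.g. Delta Lake) or a dbt snapshot rather than row-by-row Python, but the expire-and-append semantics are identical.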

Preferred Qualifications:

  • Experience with data ingestion and data modeling for Ticketmaster’s Archtics data is a big plus.
  • Experience with streaming technologies (Kafka, Kinesis, Pub/Sub).
  • Experience architecting and managing data solutions within the Azure ecosystem, including deep familiarity with Azure Data Lake Storage (ADLS), Azure Data Factory (ADF), and Azure Functions.
  • Exposure to Unity Catalog, Delta Lake, Databricks SQL, or serverless compute.
  • Experience with infrastructure-as-code (Terraform, CloudFormation).
  • Knowledge of data governance, cataloging, or metadata management tools.
  • Experience in a fast-paced, data-driven organization.

We are an equal opportunity employer and all qualified applicants will receive consideration for employment without regard to race, color, religion, national origin, sex, sexual orientation, age, disability, gender identity, marital or veteran status, or any other protected class.

If this opportunity looks exciting to you, please complete the application process. Go Hawks!
