
Data Engineer - 5113

Hybrid
Hungarian
Budapest
years of experience

What is it like to work in UC colors?

One thing’s certain: all 350+ of our colleagues would agree that it’s a truly unique and unrepeatable experience. At UC, professional growth and personal development go hand in hand. From tailored professional training to team-building events, we make sure work is both inspiring and fun.

You can soak up the vibe at IT Fest, take on the scenic Balaton Circle on two wheels or on foot, unwind in our cozy kUCkó, unleash your creativity during themed craft afternoons, or make a difference through our charity events.

About the client/project

Tasks

  • Designing, developing, and maintaining analytical data models using dbt.
  • Translating business requirements into logical and physical data models.
  • Designing and developing dimensional data models (facts & dimensions, star / snowflake schemas).
  • Implementing SCD Type 2 solutions to handle temporal changes in dimensions (a minimal sketch follows this list).
  • Managing historical data and ensuring traceability and retrievability.
  • Designing data quality checks, validations, and automated tests in dbt.
  • Designing and optimizing SQL-based transformations on large datasets.
  • Designing and implementing data loading and transformation processes.
  • Analyzing the quality, consistency, and changes of data coming from source systems.
  • Documenting and maintaining data models (dbt docs, lineage).
  • Collaborating closely with business analysts, data scientists, and other technical stakeholders.
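
To make the SCD Type 2 item above concrete: the point is to close out a superseded dimension row rather than overwrite it, so history stays traceable. Below is a minimal, purely illustrative pandas sketch; the table layout and the valid_from / valid_to / is_current column names are assumptions, and in a dbt project this logic would more typically live in a snapshot:

    import pandas as pd

    # Current state of the dimension (illustrative SCD2 layout)
    dim = pd.DataFrame({
        "customer_id": [1, 2],
        "city": ["Budapest", "Debrecen"],
        "valid_from": pd.to_datetime(["2024-01-01", "2024-01-01"]),
        "valid_to": pd.to_datetime([pd.NaT, pd.NaT]),
        "is_current": [True, True],
    })

    # Incoming source snapshot: customer 1 moved, customer 3 is new
    src = pd.DataFrame({"customer_id": [1, 3], "city": ["Szeged", "Pécs"]})
    load_ts = pd.Timestamp("2024-06-01")

    # 1. Find current rows whose tracked attribute changed
    merged = dim[dim["is_current"]].merge(src, on="customer_id", suffixes=("", "_new"))
    changed = merged.loc[merged["city"] != merged["city_new"], "customer_id"]

    # 2. Close out superseded versions instead of overwriting them
    mask = dim["customer_id"].isin(changed) & dim["is_current"]
    dim.loc[mask, ["valid_to", "is_current"]] = [load_ts, False]

    # 3. Append fresh versions for changed keys and brand-new keys
    new_keys = set(changed) | (set(src["customer_id"]) - set(dim["customer_id"]))
    new_rows = src[src["customer_id"].isin(new_keys)].assign(
        valid_from=load_ts, valid_to=pd.NaT, is_current=True
    )
    dim = pd.concat([dim, new_rows], ignore_index=True)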

Necessary knowledge/technologies used

  • Strong SQL knowledge, including handling historical data, using analytical (window) functions, and performance optimization of complex queries (see the sketch after this list).
  • Experience with dbt, with a focus on model structures and layering, applying best practices, and testing and documentation.
  • Data modeling experience with logical and physical data models, including analytical (dimensional) models.
  • Confident use of Python for data processing tasks.
  • Knowledge and active use of Python DataFrame libraries (pandas, polars, dask, or similar).
  • Version control (Git) and a basic CI/CD mindset.
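
As an illustration of the window-function requirement above: a common historical-data pattern is deriving each record’s validity interval from the next change for the same key, i.e. SQL’s LEAD(...) OVER (PARTITION BY ... ORDER BY ...). A minimal sketch of the same idea in pandas, with assumed column names:

    import pandas as pd

    # Hypothetical change-event history for one dimension attribute
    events = pd.DataFrame({
        "customer_id": [1, 1, 1, 2],
        "city": ["Budapest", "Szeged", "Győr", "Pécs"],
        "changed_at": pd.to_datetime(
            ["2024-01-01", "2024-03-10", "2024-06-01", "2024-02-15"]
        ),
    }).sort_values(["customer_id", "changed_at"])

    # pandas equivalent of LEAD(changed_at) OVER (PARTITION BY customer_id
    # ORDER BY changed_at): a row is valid until the next change for its key
    events["valid_from"] = events["changed_at"]
    events["valid_to"] = events.groupby("customer_id")["changed_at"].shift(-1)
    events["is_current"] = events["valid_to"].isna()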

What gives you an advantage

  • Experience in a Databricks environment (notebooks, basic workflows).
  • Basic knowledge of the Spark DataFrame API and Spark SQL.
  • Understanding of Delta Lake concepts (schema evolution, time travel, ACID behavior); see the sketch after this list.
  • Basic Spark performance optimization knowledge (partitioning, caching).
  • Experience in cloud environments, primarily on the Azure platform.
  • Familiarity with data processing pipelines or orchestration tools (e.g., Databricks Jobs, Airflow).
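
To make the Delta Lake bullet concrete, here is a minimal sketch of schema evolution and time travel. It assumes a Spark session with the Delta Lake extensions available (as on Databricks, where a preconfigured `spark` session already exists); the table path is hypothetical:

    from pyspark.sql import SparkSession

    # Outside Databricks, the delta-spark package must be installed and
    # the session configured for Delta; on Databricks this is built in
    spark = SparkSession.builder.getOrCreate()

    path = "/tmp/demo/customers"  # hypothetical Delta table location

    # Version 0: create a small Delta table
    spark.createDataFrame(
        [(1, "Budapest"), (2, "Pécs")], ["customer_id", "city"]
    ).write.format("delta").mode("overwrite").save(path)

    # Schema evolution: append rows that carry a new column
    spark.createDataFrame(
        [(3, "Szeged", "HU")], ["customer_id", "city", "country"]
    ).write.format("delta").mode("append").option("mergeSchema", "true").save(path)

    # Time travel: read the table exactly as it looked before the append
    v0 = spark.read.format("delta").option("versionAsOf", 0).load(path)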

Why you should join us

  • We can count on each other and form a community that is both professional and human.
  • Opportunities to take part in professional training and conferences.
  • You can build your career continuously on your successes and professional knowledge.
  • A stable company with market-leading customers.
  • You can take part in exciting team-building activities and events.

No open position for you right now?

Join our Talent Pool and be the first to know about new opportunities!
