As a Junior Data Engineer, you will play a crucial role in the development, deployment and maintenance of observability and monitoring solutions deployed alongside High Performance Computing (HPC) clusters and/or Generative Artificial Intelligence (GenAI) platforms. Observability and data engineering of the metrics collected from HPC and AI clusters are crucial for understanding performance bottlenecks, forecasting energy costs and carbon intensity, and predicting anomalies. Modern data collection tools and ML methods make these tasks possible, up to building Digital Twins of clusters. This role requires expertise in basic ETL (Extract / Transform / Load) workflows and the technologies that underpin them. It includes a balanced mix of R&D work to continue developing the solutions and support/deployment tasks to deliver to and support customers, involving both technical skills and customer interaction.

Key Responsibilities:
- Solution Development: Improving the available code base, e.g. adding support for new CPU or accelerator architectures, fixing bugs, improving usability and adding new features in agreement with the Team Leader and Senior Developers.
- Troubleshooting: Identifying and resolving issues related to hardware, software and network connectivity.
- Documentation: Creating and maintaining documentation related to deployment operations, configurations and delivery processes.
- Client Interaction: Working directly with clients to understand their needs, carry out proof-of-concept analyses and provide technical support during and after installation.
- Collaboration: Working with the R&D team, using the solutions to investigate the performance and reliability of new hardware being validated in the laboratory.
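To illustrate the kind of basic ETL workflow the role refers to, here is a minimal sketch in Python. The node names, metric fields and SQLite target are hypothetical, standing in for metrics pulled from a cluster monitoring endpoint:

```python
import sqlite3

# Hypothetical raw node-metric readings (assumed format), standing in
# for data extracted from a cluster monitoring source.
RAW_READINGS = [
    {"node": "node01", "power_w": "312.5", "temp_c": "61"},
    {"node": "node02", "power_w": "287.0", "temp_c": "58"},
    {"node": "node03", "power_w": "n/a",   "temp_c": "64"},  # bad reading
]

def extract():
    """Extract: return the raw records as received."""
    return RAW_READINGS

def transform(records):
    """Transform: cast numeric fields, dropping unparseable rows."""
    clean = []
    for r in records:
        try:
            clean.append((r["node"], float(r["power_w"]), float(r["temp_c"])))
        except ValueError:
            continue  # skip malformed readings
    return clean

def load(rows, conn):
    """Load: insert the cleaned rows into a metrics table."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS metrics (node TEXT, power_w REAL, temp_c REAL)"
    )
    conn.executemany("INSERT INTO metrics VALUES (?, ?, ?)", rows)
    conn.commit()

conn = sqlite3.connect(":memory:")
load(transform(extract()), conn)
count = conn.execute("SELECT COUNT(*) FROM metrics").fetchone()[0]
print(count)  # the malformed node03 row was dropped during transform
```

In a real deployment the extract step would query a monitoring API or message queue and the load step would target a time-series or warehouse database, but the three-stage shape stays the same.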
Required Skills:
- Basic programming experience with Python
- Fundamental understanding of SQL and database concepts
- Knowledge of REST APIs and HTTP methods
- Version control experience with Git
- Basic understanding of Docker/Podman
- Basic understanding of ETL processes

Preferred Skills:
- Experience with data processing frameworks (e.g., Pandas, PySpark)
- Familiarity with cloud platforms (AWS, GCP, or Azure)
- Basic understanding of ML/DL/AI frameworks (scikit-learn, PyTorch)
- Knowledge of data warehousing concepts
- Familiarity with other programming languages (Go, Rust, C, C++, Java)
- Experience with data visualization tools (Grafana)
- Basic understanding of Linux/Unix environments

Flexibility: Willingness to travel to different customer sites.

Qualifications: Bachelor's degree in Computer Science, Information Technology, Information Engineering or a related STEM field, or equivalent field experience. Experience with AWS, Azure, or Google Cloud.

Why E4 Computer Engineering?
E4 Computer Engineering designs and implements high-tech solutions for HPC clusters, Cloud, Data Analytics, Artificial Intelligence and Hyper-Converged Infrastructure for the Academic and Enterprise markets. For years, the company has collaborated with leading national and international research centers and is involved in national and European-level projects in HPC and AI. E4 continuously explores future scenarios to find practical and innovative solutions to complex computational demands and new application areas, anticipating technological transformation and providing reliable solutions in sophisticated contexts. E4 has a growing turnover of €24M with 70 employees. Learn more at: www.e4company.com

Seniority level: Entry level
Employment type: Full-time
Job function: IT Services and IT Consulting