About LotLinx:

LotLinx is a best-in-class, AI-powered inventory management solution for the automotive industry. LotLinx uses advanced machine learning technology to help vehicle sellers analyze shopper data, identify high-intent purchasers, and execute inventory-specific strategies to reduce days on lot and improve margin per vehicle.

Role Details:

Reporting to the Director of Machine Learning & Artificial Intelligence, the Data Engineer will be responsible for the development and deployment of cloud-first ETL processes, data management, data warehousing, and more in the automotive digital advertising industry. LotLinx is looking for a candidate with a talent for data who can improve, optimize, and lead further development of our data aggregation processes.

Required Skills (must haves):

BS degree in Computer Science or related technical field, or equivalent practical experience.
Strong analytical skills for working with unstructured datasets.
Solid understanding and working knowledge of relational or non-relational databases.
Proficiency in a major programming language (e.g., Java or C) and/or a scripting language (e.g., Scala, PHP, or Python).
Experience with data gathering, pipelining, standardization, cleansing, and stitching.
Innately curious and organized, with the drive to analyze data to identify deliverables, anomalies, and gaps, and to propose solutions that address these findings.
Please highlight experience with GCP, BigQuery, Airflow, dbt, Kubernetes, Stitch, or similar technologies in your application for this role.

Responsibilities:

Work with stakeholders, including the Analytics, Product, and Design teams, to assist with data-related technical issues and support their data infrastructure needs.
Engineer solutions for large-scale data storage, management, and curation of model training data.
Explore available technologies and design solutions to continuously improve our data quality, workflow reliability, and scalability, while reporting on performance and capabilities.
Act as an internal expert on each of our data sources in order to own overall data quality.
Design, build, and deploy new data models and ETL pipelines into production and the data warehouse.
Define and manage the overall schedule and availability of all datasets.