About LotLinx:

LotLinx is a leading automotive SaaS data company that uses AI-powered technology to provide our clients with an end-to-end Vehicle Management and Marketing System. We are experiencing tremendous growth and have an exciting opportunity for a Data Engineer based out of our Winnipeg office (remote work available). LotLinx provides employees with a dynamic work environment that is challenging, team-oriented, and full of passionate people. We offer great incentives, such as competitive compensation and benefits, flex time off, and excellent career development opportunities.

Role Details:

Reporting to the Director of Machine Learning & Artificial Intelligence, the Data Engineer will be responsible for the development and deployment of cloud-first ETL processes, data management, data warehousing, and more in the automotive digital advertising industry. LotLinx is looking for a candidate with a talent for data who can improve, optimize, and lead further development of our data aggregation processes.

Required Skills (must-haves):

BS degree in Computer Science or a related technical field, or equivalent practical experience.
Strong analytical skills related to working with unstructured datasets.
Solid working knowledge of relational or non-relational databases.
Proficiency in a major programming language (e.g., Java or C) and/or a scripting language (e.g., Scala, PHP, or Python).
Experience with data gathering, pipelining, standardization, cleansing, and stitching.
Innately curious and organized, with the drive to analyze data to identify deliverables, anomalies, and gaps, and to propose solutions that address these findings.
Please highlight experience with GCP, BigQuery, Airflow, dbt, Kubernetes, Stitch, or similar technologies in your application for this role.

Responsibilities:

Work with stakeholders, including the Analytics, Product, and Design teams, to assist with data-related technical issues and support their data infrastructure needs.
Engineer solutions for large-scale data storage, management, and curation of training data for models.
Explore available technologies and design solutions that continuously improve our data quality, workflow reliability, and scalability, while reporting on performance and capabilities.
Act as an internal expert on each of our data sources so that you can own overall data quality.
Design, build, and deploy new data models and ETL pipelines into production and the data warehouse.
Define and manage the overall schedule and availability of all data sets.