What are the responsibilities and job description for the Sr Data Engineer position at Sol-Ark?
Location: This position will be onsite at our corporate offices in Allen, Texas (Dallas-Fort Worth area).
- Candidates must be legally authorized to work in the United States without requiring visa sponsorship now or in the future.
- At this time, Sol-Ark is unable to consider candidates on F1-OPT, H-1B, or CPT status.
We are seeking an experienced Data Engineer with 10 years of professional experience focused on developing data pipelines, data lakes, and data warehouses for structured and unstructured data from millions of IoT devices transmitting data to the cloud. The ideal candidate will have strong technical knowledge of relational and NoSQL databases, data migration, data wrangling, pre/post-processing, and database administration, as well as excellent problem-solving skills and a proven track record of delivering high-quality data warehousing solutions. Candidates must be authorized to work in the US for any employer.
Responsibilities:
- Ensure the high availability, low latency, high performance, efficiency, and stability of critical data infrastructure, supporting a range of data platforms for microservices, APIs, and custom backend Java/JEE components.
- Design and develop real-time and offline batch data pipelines for custom enterprise Java applications on AWS (a minimal sketch follows this list).
- Set up and manage databases across environments for multiple applications.
- Manage continuous data releases for different products and transfer data between environments.
- Collaborate with software and firmware development teams to design and implement analytics, reporting, and related monitoring solutions.
- Optimize resource utilization (databases, network configurations, etc.) and minimize unnecessary expenditure on IT data infrastructure.
- Develop and maintain Confluence documentation for tools and pipelines.
- Collaborate frequently with software developers to configure AWS cloud data security, QA, and integration development environments.
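To give a concrete flavor of the batch-pipeline work described above, here is a minimal Java sketch of one common pattern: batch-inserting device telemetry into a staging table over JDBC. This is an illustrative sketch only; the `telemetry_staging` table, its columns, and the `Reading` record are hypothetical placeholders, not Sol-Ark's actual schema.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.sql.Timestamp;
import java.time.Instant;
import java.util.List;

/** Minimal sketch: batch-loading device telemetry into a staging table.
 *  The table name, columns, and the Reading record are hypothetical. */
public class TelemetryLoader {

    public record Reading(String deviceId, Instant reportedAt, double watts) {}

    private static final String INSERT_SQL =
        "INSERT INTO telemetry_staging (device_id, reported_at, watts) VALUES (?, ?, ?)";

    public static void load(Connection conn, List<Reading> batch) throws SQLException {
        conn.setAutoCommit(false); // one transaction per batch
        try (PreparedStatement ps = conn.prepareStatement(INSERT_SQL)) {
            for (Reading r : batch) {
                ps.setString(1, r.deviceId());
                ps.setTimestamp(2, Timestamp.from(r.reportedAt()));
                ps.setDouble(3, r.watts());
                ps.addBatch(); // queue the row; no round trip yet
            }
            ps.executeBatch(); // send the whole batch in one round trip
            conn.commit();
        } catch (SQLException e) {
            conn.rollback(); // keep the staging table consistent on failure
            throw e;
        }
    }
}
```

In production, a framework such as Spring Batch (named in the requirements below) would typically wrap this kind of chunked read-process-write step with restartability and job metadata.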
Requirements:
- Building data pipelines to extract data from multiple sources and make it usable for business objectives.
- Building data warehouses to prepare structured and unstructured data for analytics and reporting.
- Building data lakes to manage large volumes of data and to move, transform, and analyze data between cloud and on-premises environments.
- Expertise in data and database migrations between environments and applications; ability to work as a DBA, managing and transferring data between applications.
- Expert-level knowledge in the following areas:
  - Infrastructure (AWS, Docker, or related)
  - Databases (MySQL, MongoDB, InfluxDB, or similar time-series databases)
  - Database query and scripting languages (SQL, NoSQL, Python)
  - ETL/batch processes (Spring Batch, Java)
  - Big data (Hadoop, Apache Spark, Apache Parquet, or similar); see the sketch after this list
- Excellent written and verbal communication skills, both vertically and horizontally within the organization, related to all aspects of technical leadership.
- Experience setting up CI/CD and DevOps pipelines for data management.
- Experience managing APIs via an API gateway, Swagger, or similar tools for exposing data.
- Experience securing custom data and databases on AWS.
- Experience setting up data ingest points for real-time IoT applications.
- Familiarity with TLS 1.2/1.3 implementations.
- Understanding of electrical systems and power distribution.
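To illustrate the big-data portion of the requirements, here is a minimal Apache Spark (Java) sketch that reads Parquet-formatted telemetry from a data lake and computes a per-device aggregate. The S3 path and the `device_id`/`watts` column names are hypothetical placeholders.

```java
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;
import static org.apache.spark.sql.functions.avg;
import static org.apache.spark.sql.functions.col;

/** Minimal sketch: aggregating Parquet telemetry from a data lake with Spark.
 *  The S3 path and column names (device_id, watts) are hypothetical. */
public class TelemetryAggregation {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
            .appName("telemetry-aggregation")
            .getOrCreate();

        // Read Parquet files from the data lake into a DataFrame.
        Dataset<Row> telemetry = spark.read().parquet("s3a://example-bucket/telemetry/");

        // Compute the average power reading per device.
        telemetry.groupBy(col("device_id"))
                 .agg(avg(col("watts")).alias("avg_watts"))
                 .show();

        spark.stop();
    }
}
```

Parquet's columnar layout is what makes this pattern efficient: Spark reads only the `device_id` and `watts` columns rather than whole rows.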
Benefits: