Job Details | Sr. Data Engineer / Architect


This job posting is no longer active on ChicagoJobs.com and therefore cannot accept online applications.


    


Zebra Technologies Corp.

Location: Chicago, IL 60607
Document ID: AC256-001M
Posted on: 2018-07-02
Job Type: Regular

Job Schedule: Full-time
Minimum Education: Not Specified
 

Sr. Data Engineer / Architect

Location US-IL-Chicago
 

Overview

The Enterprise Intelligent Software (EIS) Business Unit within Zebra Technologies (www.zebra.com) seeks a lead database-centric data engineer to develop a high-performing, real-time data management capability that includes collecting, storing, processing, and analyzing huge data sets from IoT, mobile, enterprise, and 3rd-party systems. The primary focus will be designing optimal solutions, then building systems to implement, maintain, and monitor them, where ensuring high data quality is critical to mission performance. The candidate will leverage deep knowledge and experience to provide technical leadership for the team, take ideas from zero to completion, and bridge the gap between raw data and actionable business insights. We value high-potential, high-performing candidates who are passionate about leveraging high-quality operational data and software engineering to improve workflows on the front line of business.

Responsibilities

  • Work closely with the Product Owner and Product Management team to determine the best way to architect our database infrastructure to implement desired requirements and solutions.
  • Work closely with business stakeholders to identify data needs.
  • Work closely with technical lead stakeholders to ensure the architected solution integrates into the client's target architecture and systems.
  • Architect and help implement robust, rich data model(s) to ensure:
    • Data can be stored securely.
    • Data can be classified/modified dynamically upon ingestion for effective authorization and to support the data model structure.
    • Data can be retrieved through APIs with high performance and high reliability, using a rich query structure.
  • Architect and deploy on-premises as well as cloud-based database structures that can scale with high reliability and high availability.
  • Develop database solutions by designing the proposed system and defining the physical database structure, functional capabilities, security, backup, and recovery specifications.
  • Maintain database performance and tune databases for fast analysis by identifying and resolving production and application development problems; calculating optimum values for parameters; and evaluating, integrating, and installing new releases.
  • Work hands-on on both development and peer reviews of source code, then develop automated methods for data validation through unit and integration tests.

Qualifications

Bachelor’s degree in Computer Science or a related field.

  • Experience working with large IoT structured and unstructured datasets, including streaming video, RF, and indoor location data; real-time enterprise workflow and transactional datasets; and on-board/off-board, mobile, intelligent edge, enterprise, and cloud computing environments.
  • Experience building ERP pipelines to warehouse, retail, and healthcare systems of record and onboarding real-time 3rd-party data from DaaS providers or control systems.
  • Proven track record of building an operational data environment to maintain highly reliable, highly available, 24x7, real-time data pipelines from owned, partner, and 3rd-party sources.
  • 5+ years of experience building and shipping highly scalable distributed systems on cloud platforms and delivering large projects independently.
  • Minimum of 5 years of experience working with SQL/RDBMS
  • 5+ years of experience programming in languages such as Java and Scala.
  • 5+ years of experience working in a Hadoop environment
  • 3+ years of experience with Hive and Impala.
  • 3+ years of experience with Spark, Spark SQL, and DataFrames.
  • Some experience with machine learning using either Python or Spark ML is a plus.
  • Hands-on experience with Elasticsearch or Cassandra is a plus.
  • Experience working with the Cloudera Hadoop (CDH) platform is a plus.
  • Experience working in an AWS environment is a plus.
  • Experience working with streaming services such as Kafka is a plus.
  • Experience with REST Web Services is a plus.
  • Some experience with QA automation is a plus.

 
     