What you’ll be doing...

Roles & Responsibilities:

  • Join our IT Application Security Team, where you'll assist with data models for machine learning and work closely with the team to evaluate tools for real-time data processing. You'll also be responsible for integrating applications into the Kafka data pipeline, will need exposure across our technology stacks, and will help design solutions for integrating applications with Kafka (a minimal integration sketch follows this list).
  • Establish scalable, efficient, automated processes for large-scale data analysis, model development, model validation, and model implementation.
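
For context on the Kafka integration work described above, here is a minimal, illustrative sketch of publishing an application event to a Kafka topic with the standard Apache Kafka Java client. The broker address, topic name, and event payload are assumptions for the example, not details of this role.

    import java.util.Properties;

    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.Producer;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.common.serialization.StringSerializer;

    public class SecurityEventPublisher {
        public static void main(String[] args) {
            // Hypothetical broker address; point this at your own cluster.
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092");
            props.put("key.serializer", StringSerializer.class.getName());
            props.put("value.serializer", StringSerializer.class.getName());

            // Publish one JSON event to an (assumed) pipeline topic and wait for delivery.
            try (Producer<String, String> producer = new KafkaProducer<>(props)) {
                ProducerRecord<String, String> record = new ProducerRecord<>(
                        "app-security-events", "app-1", "{\"event\":\"login_failure\"}");
                producer.send(record);
                producer.flush();
            }
        }
    }

In a Spring Boot service, the same idea would typically be expressed through KafkaTemplate from Spring for Apache Kafka rather than the raw client shown here.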

What we’re looking for...

Minimum work experience required: 4 or more years of relevant work experience.

Technology / Tools Required / Exposure:

  • Expertise in Kafka, Pulsar, JMS, and RabbitMQ.
  • Experience with big data technologies and statistical programming languages such as Python, R, or Scala.
  • Experience with microservices technologies (development/design), including Spring, Spring Boot, REST web services, and web technologies.
  • Applied knowledge of Splunk, SPL, and anomaly detection and visualization development (a rough consumer-side sketch follows this list).
  • Applied knowledge of data administration practices and approaches for data collection and ingestion using open-source tools such as Logstash and Kafka in a Hadoop ecosystem.
  • Knowledge of algorithms and data science.
  • Bachelor’s degree in engineering, statistics, or mathematics.
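
The Splunk/SPL work itself isn't shown here, but as a language-agnostic illustration of the kind of streaming anomaly detection described above, the following sketch consumes an (assumed) Kafka topic of numeric metrics and flags values far from a running mean. The topic, group id, and three-sigma threshold are all assumptions for the example.

    import java.time.Duration;
    import java.util.Collections;
    import java.util.Properties;

    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.serialization.StringDeserializer;

    public class MetricAnomalyDetector {
        public static void main(String[] args) {
            // Hypothetical broker, group id, and topic; substitute values for your environment.
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092");
            props.put("group.id", "anomaly-detector");
            props.put("key.deserializer", StringDeserializer.class.getName());
            props.put("value.deserializer", StringDeserializer.class.getName());

            // Running statistics (Welford's online algorithm) for a single numeric metric.
            long count = 0;
            double mean = 0.0;
            double m2 = 0.0;

            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(Collections.singletonList("app-metrics"));
                while (true) {
                    ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                    for (ConsumerRecord<String, String> record : records) {
                        double value = Double.parseDouble(record.value());

                        // Update the running mean and variance.
                        count++;
                        double delta = value - mean;
                        mean += delta / count;
                        m2 += delta * (value - mean);

                        // After a warm-up period, flag values more than three standard
                        // deviations from the running mean.
                        if (count > 30) {
                            double stdDev = Math.sqrt(m2 / (count - 1));
                            if (stdDev > 0 && Math.abs(value - mean) / stdDev > 3.0) {
                                System.out.printf("Possible anomaly: %.2f (mean %.2f, std %.2f)%n",
                                        value, mean, stdDev);
                            }
                        }
                    }
                }
            }
        }
    }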

When you join Verizon...

You’ll be doing work that matters alongside other talented people, transforming the way people, businesses and things connect with each other. Beyond powering America’s fastest and most reliable network, we’re leading the way in broadband, cloud and security solutions, and the Internet of Things, and innovating in areas such as video entertainment. Of course, we will offer you great pay and benefits, but we’re about more than that. Verizon is a place where you can craft your own path to greatness. Whether you think in code, words, pictures or numbers, find your future at Verizon.