When you join Verizon

Verizon is one of the world’s leading providers of technology and communications services, transforming the way we connect across the globe. We’re a diverse network of people driven by our shared ambition to shape a better future. Here, we have the ability to learn and grow at the speed of technology, and the space to create within every role. Together, we are moving the world forward – and you can too. Dream it. Build it. Do it here.

What you’ll be doing...

As part of the Artificial Intelligence and Data Organization (AI&D), you will drive activities including data engineering, data frameworks, and data-driven visualization interfaces to improve the company's efficiency, customer experience, and profitability. You will analyze marketing, customer experience, and digital operations environments to build data pipelines and transform data into actionable intelligence via visualization interfaces. You will turn real-time streaming raw data into usable data pipelines and build data tools and products that automate effort and make data easily accessible.

  • Design, build, and support the digital twin and enterprise insights visualization platform, which combines third-party and Verizon internal data on a big data visualization platform.
  • Gather requirements, assess gaps, and build roadmaps and architectures to help the analytics driven organization achieve its goals.
  • Work closely with Data Products and Analysts to ensure data quality and availability for analytical and simulation modeling.
  • Identify gaps and implement solutions for data security, quality, and automation of processes.
  • Design the enterprise digital twin data visualization and simulation platforms; build, document, test, and implement new data visualization and analytics interfaces.
  • Collaborate in cross-functional teams to source new data, develop schema requirements, and maintain metadata.
  • Identify ways to improve data reliability, efficiency, and quality.
  • Use data to discover tasks that can be automated.
  • Analyze existing data pipelines/SQL queries for performance improvement.

What we’re looking for...

You’ll need to have:

  • Bachelor’s degree or four or more years of work experience.
  • Six or more years of relevant work experience.
  • Experience designing, building, and deploying production-level data pipelines using tools from the Hadoop stack (HDFS, Hive, Spark, HBase, Kafka, NiFi, Oozie, Splunk, etc.).
  • Experience with SQL databases and Change Data Capture.
  • Experience with full-stack technologies such as Java Spring Boot and Node.js, along with open-source data analytics tools like Druid and Superset.
  • Experience with Agile and DevOps methodologies.
  • Experience programming in Java or Scala.
  • Experience with visualization and BI platforms such as Looker or Qlik.

Even better if you have one or more of the following:

  • A degree in Computer Science, Information Technology, or Computer Engineering.
  • Experience with cloud technologies (AWS, GCP, PCF, Docker, Kubernetes) and application migration.
  • Experience with data simulation platforms.
  • Knowledge of Big Data/AI/ML architectures, solutions, trends, and frameworks, with the ability to troubleshoot issues, validate solutions, and recommend and implement architectural improvements.
  • Good conflict resolution and negotiation skills.
  • Knowledge of telecom architecture.
  • Ability to communicate effectively through presentations and strong interpersonal, verbal, and written skills.