
Big Data Engineer

Accenture is a global management consulting, technology services and outsourcing company, with more than 305,000 people serving clients in more than 120 countries.  Combining unparalleled experience, comprehensive capabilities across all industries and business functions, and extensive research on the world’s most successful companies, Accenture collaborates with clients to help them become high-performance businesses and governments.  The company generated net revenues of US$30.0 billion for the fiscal year ended Aug. 31, 2014.  Its home page is www.accenture.com.


 

WARSAW

What will you do?

You will participate in projects focused on Big Data systems. The technology stack you will work with includes Hive, Impala, Spark, NiFi, HBase, HDFS, Kafka, Kudu, Ranger, and others.

What we expect:

  • Technical university degree, preferably in IT, Telecommunications, or Mathematics
  • Knowledge of SQL and Python
  • Experience with Big Data systems would be an asset
  • Knowledge of Linux, GIT
  • Availability and openness to learning

What we offer:

  • Local and global cross-sector projects with results visible on the market
  • Clearly defined career path
  • Training and development programme (certification)

 

CRACOW

Requirements

Experience level: Mid

  • Graduates or final-year students of technical studies such as computer science, telecommunications, electronics and information systems, mathematics, or quantitative methods
  • Knowledge of SQL.
  • Experience with Big Data technologies, especially selected tools such as Spark, NiFi, Kafka and others in combination with Hadoop (HDP 2.6, HDP 3.x): Hive, HDFS, Spark.
  • Basic knowledge of Python and/or Scala.
  • Ability to work in a Linux environment.
  • Availability and willingness to learn and develop.

Technologies

Necessary on this position:

  • Spark in Scala
  • Hadoop (Yarn, HDFS)
  • Hive

Nice-to-have:

  • NiFi
  • SQL
  • Docker
  • Java
  • Ambari
  • Hortonworks Data Platform stack

Project you can join

  • On your first assignment, you will join a team that supports sales activities by providing the business with powerful analytics tools. Those tools are built on top of the Hortonworks Data Platform and can be exposed in Docker containers.
  • You will be responsible for implementing Data Flows that ingest data from different sources, e.g., Kafka topics, SAP, and S3, and store it in Hadoop to be processed at the end by a Spark job.
  • As a Big Data Engineer, you will work closely with the Data Science Team on the implementation of analytical models and machine learning algorithms.
  • During the project you will work closely with the local team and some experts from Europe.
  • You will use Slack or Skype to communicate.
  • You will mostly work in an AWS environment.
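To give a flavor of the Spark aggregation step in such a pipeline, here is a minimal sketch using plain Scala collections, whose operations Spark's RDD/Dataset API mirrors (map, filter, groupBy/reduceByKey). The record shape and field names are hypothetical illustrations, not taken from the actual project:

```scala
// Hypothetical sketch of the kind of per-key aggregation a Spark job
// might run over ingested sales events (e.g., read from a Kafka topic
// and landed in Hadoop). Plain Scala collections stand in for RDDs.
object SalesAggregationSketch {
  // Toy record standing in for one ingested event (hypothetical schema)
  case class SaleEvent(region: String, amount: Double)

  // Total sales per region, as a Spark job might compute via reduceByKey
  def totalsByRegion(events: Seq[SaleEvent]): Map[String, Double] =
    events.groupBy(_.region).map { case (region, evs) =>
      region -> evs.map(_.amount).sum
    }

  def main(args: Array[String]): Unit = {
    val events = Seq(
      SaleEvent("EU", 120.0),
      SaleEvent("EU", 80.0),
      SaleEvent("US", 50.0)
    )
    println(totalsByRegion(events))
  }
}
```

In an actual Spark job the `Seq` would be a distributed `Dataset[SaleEvent]` created from the ingested files or topics, but the transformation logic reads the same.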

Work time division

  • Bug fixing 5%
  • New features 60%
  • Documentation 15%
  • Self-development 10%
  • Meetings 10%

How we code?

  • Version control: Git, Bitbucket
  • Code review: experienced Team member
  • IDE: Eclipse or other tools preferred by you

How we test?

  • Unit test
  • Manual testing
  • Test automation
  • CI

How we manage our projects?

  • Methodology: Scrum
  • Who makes architectural decisions? Team and product owner
  • Who makes technology stack decisions? Team
  • Project management software: JIRA
  • Team line-up: Developers: 15

Toolset

  • Laptop
  • Additional monitor
  • Headphones
  • Operating system: Windows, macOS, or Linux

Work environment

  • Tech supervisor
  • Open space
  • Separate rooms: meeting rooms
  • Office hours: 8:30 – 17:30
