About the role
We are looking for a strong Data Engineer with experience working with data of various types. You will help extract meaning from raw data and convert it into actionable insights for our customers.
You will build and maintain ETL data pipelines and services to deliver data across services. The candidate should be experienced in exploratory data analysis, machine learning, network graph modeling, data mining, data-driven analytics, and statistical methods.
Responsibilities
- Building ETL pipelines to process raw data and convert it into consumable data for other teams.
- Performing data mining, statistical analysis, predictive modeling, simulation, and forecasting to develop solutions.
- Establishing and implementing end-to-end proofs of concept.
- Performing enterprise-level software development, integration, and implementation using data science and big data technologies.
- Collaborating with other disciplines across Engineering, UX, and Product to develop insightful solutions.
- Writing data APIs so other teams can consume the derived data.
- Building highly available and extremely reliable transactional systems.
- Owning and managing all phases of the software development lifecycle (planning, design, implementation, deployment, and support).
- Writing scripts to mine data from public data sources.
- Preparing datasets for training and statistical modeling.
- Collaborating effectively with the rest of the team to design and launch new features.
- Delivering on rapid implementation schedules to build web functionality that is fast, scalable, and upholds smart development goals and principles.
- Maintaining code integrity and quality, and ensuring the responsiveness of applications.
Technical Requirements:
- 2+ years of relevant work experience and a degree in Computer Science or a related technical discipline are required.
- Experience in the design and development of data science software applications.
- Experience managing data workflows and writing ETL pipelines in Python or R, using data science libraries such as scikit-learn, MLlib, and pandas.
- Experience with exploratory data analysis, including wrangling, grooming, transformation, and analysis.
- Experience with supervised and unsupervised machine learning techniques.
- Knowledge of code versioning tools such as Git.
- Knowledge of cloud computing platforms such as Amazon Web Services.
- Strong knowledge of database systems such as MySQL.
- Strong problem-solving skills, with the interest and ability to learn new technology stacks as needed.
- Passion for building competitive moats by deriving insightful data points.
- Independent, dedicated, and able to deliver production-ready code with minimal guidance.
- Prior startup experience is a bonus.
Cultural Requirements:
- H2O - Humble Hustling Operators.
- Customer over Company.
- Team over Me.
- Add Everyday - Learning. Growth. Value.
- Superb Form. Great Substance.
- Transparency.
What we offer today*:
- An early-bird seat at a potential industry-defining SaaS company.
- A collaborative, open work environment that fosters ownership, creativity, and agility.
- Immense learning opportunities that help you grow in a short span of time.
- Unlimited paid leaves.
- Competitive salary.
- Stock options: everyone at Krayo gets them.
- *Our perks are only going to go up... :).
Send your resume to careers@krayo.io and we will contact you.