Prathap

Senior Snowflake Developer


Email: kumarprathapj@gmail.com
Phone: 909-683-4060
SUMMARY
Qualified professional with 9+ years of extensive IT experience, especially in Data Warehousing and Business Intelligence applications across the Financial, Retail, Telecom, Insurance, Healthcare, and Technology Solutions industries.
 Strong experience in migrating other databases to Snowflake.
Participate in design meetings for the creation of the data model and provide guidance on data architecture best practices.
Participate in the development, improvement, and maintenance of Snowflake database applications.
 Evaluate Snowflake Design considerations for any change in the application.
 Build the Logical and Physical data model for snowflake as per the changes required.
Define the roles and privileges required to access different database objects.
Define virtual warehouse sizing for Snowflake for different types of workloads (see the sketch at the end of this summary).
 Design and code required Database structures and components.
 Worked with cloud architect to set up the environment.
Experienced with mainframe technologies including JCL, VSAM, and COBOL.
 Ensure incorporation of best practices and lessons learned from prior projects.
Code stored procedures and triggers.
 Experience in creating end user reports (like Cross tab, list, chart etc.) using Cognos Report Studio and Cognos
Query Studio.
 Experience with Guidewire ClaimCenter and Contact Manager 8.0.3 product versions.
 Implement performance tuning where applicable.
Design batch cycle procedures on major projects using scripting and Control-M.
Develop SQL queries using SnowSQL.
Develop transformation logic using Snowpipe.
Optimize and fine-tune queries.
 Performance tuning of Big Data workloads.
Good knowledge of ETL concepts and hands-on ETL experience.
Write highly tuned, performant SQL on various database platforms, including MPPs.
 Develop highly scalable, fault tolerant, maintainable ETL data pipelines to handle vast amount of data.
 Build high quality, unit testable code.
 Operationalize data ingestion, data transformation and data visualization for enterprise use.
 Define architecture, best practices and coding standards for the development team.
 Provides expertise in all phases of the development lifecycle from concept and design to testing and operation.
 Work with domain experts, engineers, and other data scientists to develop, implement, and improve upon
existing systems.
 Deep dive into data quality issues and provide corrective solutions.
Interface with business customers, gather requirements, and deliver complete data engineering solutions.
 Ensure proper data governance policies are followed by implementing or validating Data Lineage, Quality
checks, classification, etc.
 Deliver quality and timely results.
Mentor and train junior team members and ensure coding standards are followed across the project.
 Help talent acquisition team in hiring quality engineers.
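
The warehouse sizing and access-control work called out above typically comes down to a few DDL and GRANT statements. A minimal Snowflake SQL sketch follows; the warehouse, role, database, and schema names are hypothetical placeholders and the sizes are illustrative only.

    -- Hypothetical sketch: one warehouse per workload type, suspended when idle
    CREATE WAREHOUSE IF NOT EXISTS WH_ETL_MEDIUM
      WAREHOUSE_SIZE = 'MEDIUM'   -- batch ETL gets a larger size
      AUTO_SUSPEND   = 300        -- suspend after 5 idle minutes to save credits
      AUTO_RESUME    = TRUE;

    CREATE WAREHOUSE IF NOT EXISTS WH_BI_XSMALL
      WAREHOUSE_SIZE = 'XSMALL'   -- interactive BI queries get a small size
      AUTO_SUSPEND   = 60
      AUTO_RESUME    = TRUE;

    -- Role and privileges scoped to the objects the role actually needs
    CREATE ROLE IF NOT EXISTS ANALYST_ROLE;
    GRANT USAGE  ON WAREHOUSE WH_BI_XSMALL TO ROLE ANALYST_ROLE;
    GRANT USAGE  ON DATABASE  SALES_DB TO ROLE ANALYST_ROLE;
    GRANT USAGE  ON SCHEMA    SALES_DB.REPORTING TO ROLE ANALYST_ROLE;
    GRANT SELECT ON ALL TABLES IN SCHEMA SALES_DB.REPORTING TO ROLE ANALYST_ROLE;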

Education:
Bachelor's in Electronics and Communications Engineering, Gitam University – 2012
Master's in Computer Science Engineering, SVU University – 2016

TECHNICAL SKILLS:
Cloud Technologies:  Snowflake, SnowSQL, Snowpipe, AWS
Big Data:  Spark, Hive LLAP, Beeline, HDFS, MapReduce, Pig, Sqoop, HBase, Oozie, Flume
Reporting Systems:  Splunk
Hadoop Distributions:   Cloudera, Hortonworks
Programming Languages:  Scala, Python, Perl, Shell scripting.
Data Warehousing: Snowflake, Redshift, Teradata
DBMS:  Oracle, SQL Server, MySQL, DB2
Operating Systems:  Windows, Linux, Solaris, CentOS, OS X
IDEs:  Eclipse, NetBeans.
Servers:  Apache Tomcat

PROFESSIONAL WORK EXPERIENCE


Client: Universal Orlando - Orlando, FL (Nov 2020 - Till Date)
Role: Sr. Snowflake Developer

Responsibilities:
 Created tables and views on snowflake as per the business needs.
Used TabJolt to run load tests against the views on Tableau.
Created reports in Metabase to track the cost impact of Tableau workloads on Snowflake.
 Participated in sprint planning meetings, worked closely with manager on gathering the requirements.
 Designed and implemented efficient data pipelines (ETLs) in order to integrate data from a variety of
sources into Data Warehouse.
Worked extensively with mainframe technologies: COBOL, JCL, DB2, IMS, Control-M and VSAM.
Developed ETL pipelines in and out of the data warehouse using Snowflake's SnowSQL and wrote SQL queries against Snowflake (see the sketch after this list).
 Performed data quality issue analysis using SnowSQL by building analytical warehouses on Snowflake.
 Implemented data intelligence solutions around Snowflake Data Warehouse.
Migrated data from source systems such as mainframes, RDBMSs, and files into a data lake solution.
Performed various QA tasks as necessary.
Deployed the Snowflake data platform in High Tech/Networking (Juniper Networks), Retail (Netezza migration at PetCo), and Construction (WebCor); delivered many Snowflake POCs, including SaaS (ServiceNow) and parking data (Parkimon).
Delivered Power BI on MS SQL (Ohio Medicals), Power BI and Tableau on Snowflake (WebCor), and a Cloudera Hadoop and SAP Vora POC (ServiceNow).
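
As a rough illustration of the SnowSQL-driven pipelines referenced above, the sketch below loads a staged file, builds a reporting view for Tableau, and unloads the aggregate back out. The stage, table, and view names are hypothetical.

    -- Hypothetical sketch: load staged CSV files into a raw table
    -- (assumes RAW.PARK_ATTENDANCE and stage RAW.DAILY_STAGE already exist)
    COPY INTO RAW.PARK_ATTENDANCE
      FROM @RAW.DAILY_STAGE/attendance/
      FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1)
      ON_ERROR = 'CONTINUE';                 -- keep loading past bad rows

    -- Aggregate view consumed by Tableau
    CREATE OR REPLACE VIEW REPORTING.V_DAILY_ATTENDANCE AS
    SELECT visit_date, park_id, COUNT(*) AS visits
    FROM RAW.PARK_ATTENDANCE
    GROUP BY visit_date, park_id;

    -- Unload the aggregate back to the stage for an external consumer
    COPY INTO @RAW.DAILY_STAGE/exports/daily_attendance/
      FROM (SELECT * FROM REPORTING.V_DAILY_ATTENDANCE)
      FILE_FORMAT = (TYPE = CSV)
      OVERWRITE = TRUE;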

Client: Toyota - Plano, TX (Oct 2017 - Nov 2020)


Role: Sr. Snowflake Developer

Responsibilities:
Involved in end-to-end migration of 800+ objects (4 TB) from SQL Server to Snowflake.
Moved data from SQL Server to Azure, then into a Snowflake internal stage, and loaded it into Snowflake with COPY options (see the sketch after this list).
 Published reports and dashboards using Power BI.
Created roles and access-level privileges and handled Snowflake admin activities end to end.
Converted 230 view queries from SQL Server for Snowflake compatibility.
 Retrofitted 500 Talend jobs from SQL Server to Snowflake.
Worked on SnowSQL and Snowpipe.
 Converted Talend Joblets to support the snowflake functionality.
Created DAX queries to generate computed columns in Power BI.
 Involved in creating new stored procedures and optimizing existing queries and stored procedures.
Created data sharing between two Snowflake accounts (Prod to Dev).
Migrated 500+ tables and views from Redshift to Snowflake.
 Redesigned the Views in snowflake to increase the performance.
 Unit tested the data between Redshift and Snowflake.
 Worked on Guidewire Claim Center and Contact Manager data model and database enhancements to
support PI Claims.
 Creating Reports in Looker based on Snowflake Connections.
Validated Looker reports against the Redshift database.
 Created Dashboard for the presentation using most of the Cognos functionality (such as Conditional
Formatting, Drill through and Dashboards etc.)
 Reporting testing - creating test cases, scripts and executing them. Tested the fixed reports in Cognos 8
environment as well as the Performance of the reports in Cognos 8 environment.
 Created data sharing out of snowflake with consumers.
 Worked on replication and data mapping of ODS tables to Guidewire ClaimCenter typelists and entities.
Validated the data from SQL Server against Snowflake to ensure an apples-to-apples match.
Consulted on Snowflake data platform solution architecture, design, development, and deployment, focused on bringing a data-driven culture across the enterprise.
Drove the replacement of other data platform technologies with Snowflake at the lowest TCO, with no compromise on performance, quality, or scalability.
Built solutions once for all, with no band-aid approach.
 Played key role in testing Hive LLAP and ACID properties to leverage row level transactions in hive.
Volunteered in designing an architecture for a dataset in Hadoop with an estimated data size of 2 PB/day.
 Integrated Splunk reporting services with Hadoop eco system to monitor different datasets.
 Used Avro, Parquet and ORC data formats to store in to HDFS.
 Developed workflow in SSIS to automate the tasks of loading the data into HDFS and processing using hive.
Developed alerts and timed reports; developed and managed Splunk applications.
 Provide leadership and key stakeholders with the information and venues to make effective, timely
decisions.
 Establish and ensure adoption of best practices and development standards.
 Communicate with peers and supervisors routinely, document work, meetings, and decisions.
 Work with multiple data sources.
Designed and created Hive external tables using a shared metastore instead of Derby, with partitioning, dynamic partitioning, and buckets.
Implemented Apache Pig scripts to load data into Hive.
 Developed Python scripts to take backup of EBS volumes using AWS Lambda and Cloud Watch
 Creating scripts for system administration and AWS using languages such as BASH and Python.
 Responsible for Continuous Integration (CI) and Continuous Delivery (CD) process implementation- using
Jenkins along with Python and Shell scripts to automate routine jobs.
Created pre-commit hooks in Python/Shell/Bash for JIRA pattern-ID authentication when committing code in SVN, limiting file size and file type and restricting the development team's check-ins at commit time.
Integrated Puppet with Apache and developed load testing and monitoring suites in Python.
Developed microservice onboarding tools leveraging Python and Jenkins, allowing for easy creation and maintenance of build jobs and Kubernetes deployments and services.
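
The SQL Server-to-Snowflake load path described above (see the bullet on COPY options) usually boils down to staging the extracted files and running COPY INTO. A minimal sketch, with hypothetical stage, file, and table names:

    -- Hypothetical sketch: land extracted SQL Server files in an internal stage, then COPY
    CREATE STAGE IF NOT EXISTS STG.SQLSERVER_EXPORT;

    -- PUT runs from SnowSQL on the host holding the extracted files
    PUT file:///data/exports/dealer_sales_*.csv @STG.SQLSERVER_EXPORT AUTO_COMPRESS = TRUE;

    COPY INTO STG.DEALER_SALES      -- target table, assumed to exist with matching columns
      FROM @STG.SQLSERVER_EXPORT
      PATTERN = '.*dealer_sales_.*[.]csv[.]gz'
      FILE_FORMAT = (TYPE = CSV FIELD_OPTIONALLY_ENCLOSED_BY = '"' SKIP_HEADER = 1)
      ON_ERROR = 'ABORT_STATEMENT'  -- fail fast so row counts can be reconciled with SQL Server
      PURGE = TRUE;                 -- remove staged files after a successful load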

Environment: Snowflake, Redshift, SQL Server, AWS, Azure, Talend, Jenkins and SQL

Client: Cisco – San Jose, CA (Dec 2016 – Sep 2017)


Role: Snowflake/NiFi Developer

Responsibilities:
 Evaluate Snowflake Design considerations for any change in the application.
 Build the Logical and Physical data model for snowflake as per the changes required.
 Define roles, privileges required to access different database objects.
 Define virtual warehouse sizing for Snowflake for different type of workloads.
 Design and code required Database structures and components.
Published Power BI reports to the required organizations and made Power BI dashboards available in web clients and mobile apps.
 Worked with cloud architect to set up the environment.
 Involved in Migrating Objects from Teradata to Snowflake.
 Created Snowpipe for continuous data load.
 Used COPY to bulk load the data.
 Created internal and external stage and transformed data during load.
Used the FLATTEN table function to produce a lateral view of VARIANT, OBJECT, and ARRAY columns (see the sketch after this list).
 Worked with both Maximized and Auto-scale functionality.
Used temporary and transient tables on different datasets.
 Cloned Production data for code modifications and testing.
 Shared sample data using grant access to customer for UAT.
Used Time Travel of up to 56 days to recover missed data.
Developed a data warehouse model in Snowflake for over 100 datasets using WhereScape.
 Heavily involved in testing Snowflake to understand best possible way to use the cloud resources.
 Developed ELT workflows using NiFI to load data into Hive and Teradata.
 Worked on Migrating jobs from NiFi development to Pre-PROD and Production cluster.
 Scheduled different Snowflake jobs using NiFi.
 Used NiFi to ping snowflake to keep Client Session alive.
Worked on Oracle databases, Redshift, and Snowflake.
Major challenges of the system included integrating and accessing many systems spread across South America, creating a process to involve third-party vendors and suppliers, and creating authorization for various department users with different roles.
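
A small sketch of the semi-structured pattern referenced in the FLATTEN bullet above, using a hypothetical VARIANT table; the Time Travel query at the end mirrors the recovery approach mentioned earlier in this list.

    -- Hypothetical sketch: explode a JSON array held in a VARIANT column
    CREATE OR REPLACE TABLE RAW.DEVICE_EVENTS (
      DEVICE_ID VARCHAR,
      PAYLOAD   VARIANT      -- e.g. {"readings": [{"ts": "...", "value": 1.2}, ...]}
    );

    SELECT
      e.DEVICE_ID,
      r.value:ts::TIMESTAMP_NTZ AS reading_ts,
      r.value:value::FLOAT      AS reading_value
    FROM RAW.DEVICE_EVENTS e,
         LATERAL FLATTEN(INPUT => e.PAYLOAD:readings) r;

    -- Time Travel: read the table as it was 24 hours ago (within the retention window)
    SELECT * FROM RAW.DEVICE_EVENTS AT (OFFSET => -60*60*24);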

Environment: Snowflake, SQL Server, AWS and SQL

Client: Infotech Enterprises – Hyderabad, India (Sep 2013 – June 2015)


Role: Big Data Engineer

Responsibilities:
Primarily worked on a project to develop an internal ETL product to handle complex and large-volume healthcare claims data. Designed the ETL framework and developed a number of packages.
Performed data source investigation, developed source-to-destination mappings, and carried out data cleansing while loading the data into staging/ODS regions.
 Involved in various Transformation and data cleansing activities using various Control flow and data flow
tasks in SSIS packages during data migration.
 Applied various data transformations like Lookup, Aggregate, Sort, Multicasting, Conditional Split, Derived
column etc.
 Played key role in testing Hive LLAP and ACID properties to leverage row level transactions in hive.
Volunteered in designing an architecture for a dataset in Hadoop with an estimated data size of 2 PB/day.
 Integrated Splunk reporting services with Hadoop eco system to monitor different datasets.
 Used Avro, Parquet and ORC data formats to store in to HDFS.
 Developed workflow in SSIS to automate the tasks of loading the data into HDFS and processing using hive.
Developed alerts and timed reports; developed and managed Splunk applications.
 Provide leadership and key stakeholders with the information and venues to make effective, timely
decisions.
 Establish and ensure adoption of best practices and development standards.
 Communicate with peers and supervisors routinely, document work, meetings, and decisions.
 Work with multiple data sources.
Designed and created Hive external tables using a shared metastore instead of Derby, with partitioning, dynamic partitioning, and buckets.
Implemented Apache Pig scripts to load data into Hive.
 Developed Mappings, Sessions, and Workflows to extract, validate, and transform data according to the
business rules using Informatica.
Wrote queries and stored procedures, including complex queries and T-SQL joins, to address various reporting operations as well as ad-hoc data requests.
 Performance monitoring and Optimizing Indexes tasks by using Performance Monitor, SQL Profiler,
Database Tuning Advisor, and Index tuning wizard.
 Acted as point of contact to resolve locking/blocking and performance issues.
Wrote scripts and an indexing strategy for a migration to Confidential Redshift from SQL Server and MySQL databases.
Worked on AWS Data Pipeline to configure data loads from S3 into Redshift.
Used a JSON schema to define the table and column mapping from S3 data to Redshift (see the sketch after this list).
 Wrote indexing and data distribution strategies optimized for sub-second query response.
Developed microservice onboarding tools leveraging Python and Jenkins, allowing for easy creation and maintenance of build jobs and Kubernetes deployments and services.
 Automated Release Notes using Python and Shell scripts.
 Used Python scripts to configure the WebLogic application server in all the environments.
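
A minimal sketch of the S3-to-Redshift load pattern referenced above; the bucket, IAM role ARN, jsonpaths file, and table are all hypothetical placeholders.

    -- Hypothetical sketch: map JSON data on S3 to Redshift columns via a jsonpaths file
    COPY claims_staging (claim_id, member_id, claim_date, amount)
    FROM 's3://example-claims-bucket/incoming/'
    IAM_ROLE 'arn:aws:iam::123456789012:role/example-redshift-load'
    FORMAT AS JSON 's3://example-claims-bucket/config/claims_jsonpaths.json'
    TIMEFORMAT 'auto'
    MAXERROR 10;    -- tolerate a handful of malformed records per load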

Client: Tech Aspect Solutions Pvt Ltd – Hyderabad, India (Aug 2012 – Aug 2013)
Role: ETL Developer

Responsibilities:

 Developed Logical and Physical data models that capture current state/future state data elements and data
flows using Erwin 4.5.
Responsible for designing and building the data mart as per the requirements.
 Extracted Data from various sources like Data Files, different customized tools like Meridian and Oracle.
Extensively worked on views, stored procedures, triggers, and SQL queries for loading the data (staging) to enhance and maintain the existing functionality.
Analyzed the source, requirements, and existing OLTP system, and identified the required dimensions and facts from the database.
 Created Data acquisition and Interface System Design Document.
Designed the dimensional model of the data warehouse and confirmed source data layouts and needs (see the sketch after this list).
 Extensively used Oracle ETL process for address data cleansing.
 Developed and tuned all the Affiliations received from data sources using Oracle and Informatica and
tested with high volume of data.
 Responsible for developing, support and maintenance for the ETL (Extract, Transform and Load) processes
using Oracle and Informatica PowerCenter.
Created common reusable objects for the ETL team and oversaw coding standards.
 Reviewed high-level design specification, ETL coding and mapping standards.
 Designed new database tables to meet business information needs. Designed Mapping document, which is
a guideline to ETL Coding.
 Used ETL to extract files for the external vendors and coordinated that effort.
 Migrated mappings from Development to Testing and from Testing to Production.
 Performed Unit Testing and tuned for better performance.
 Created various Documents such as Source-to-Target Data mapping Document, and Unit Test Cases
Document.
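
As a compact illustration of the dimensional modelling work referenced above, a star-schema fragment follows; the fact and dimension names and columns are hypothetical.

    -- Hypothetical sketch: a dimension plus a fact table joined on a surrogate key
    CREATE TABLE dim_customer (
      customer_key   INTEGER PRIMARY KEY,   -- surrogate key
      customer_id    VARCHAR(20),           -- natural key from the source system
      customer_name  VARCHAR(100),
      region         VARCHAR(50)
    );

    CREATE TABLE fact_orders (
      order_key      INTEGER PRIMARY KEY,
      customer_key   INTEGER REFERENCES dim_customer (customer_key),
      order_date_key INTEGER,               -- would reference a date dimension (omitted here)
      order_amount   DECIMAL(12,2),
      quantity       INTEGER
    );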
