Leveraging over 14 years of experience in data analysis, financial forecasting, business strategy, and people and project management, I apply business analytics and statistical programming to develop predictive models, growth strategies, and data-driven insights. I am a creative problem solver who drives value by deploying interactive dashboards, robotic automation, and reproducible reporting.
Centene Corporation: Data Scientist III
As part of Health Net’s Strategic Insights team within Commercial Operations, we derive data-driven insights for the Product Performance and Sales organizations. We use Microsoft Excel, the R programming language, and professional RStudio tools to analyze sales data, healthcare claims, and publicly available competitor data. We communicate impactful findings to leadership and deploy predictive tools that increase sales and improve healthcare product quality while lowering costs.
- Working with Underwriters and Actuaries, the Strategic Insights team designs custom predictive algorithms and deploys R Shiny applications to production using statistical programming and machine learning. We experiment, test, and iterate to solve problems in multivariate regression and classification, clustering, feature engineering, automation, and time-series forecasting.
- As co-organizer of Centene’s internal R User Group, I coordinate monthly learning sessions on best practices and data science modeling workflows, and keep Centene’s R users apprised of new data analysis, reporting, and visualization techniques.
- Led a series of introductory R trainings for over 150 analysts, data scientists, actuaries, underwriters, and business intelligence professionals.
- My daily toolkit includes R, RStudio Desktop, Microsoft Excel, Shiny, GitLab, StorageGRID (AWS S3), H2O, SQL, JupyterHub, RStudio Connect, Oracle, and SAP BI.
KPMG: Senior Manager
Managed multi-disciplinary valuation and M&A engagement teams, led internal and external communications, designed complex fixed asset valuation models, hired and trained new and experienced professionals, and researched best practices for KPMG’s Economic & Valuation Services (EVS) practice. Leading a team of Managers, Senior Analysts, and Associates, I provided conceptual and technical guidance on data collection & exploration, data analysis, financial modeling, valuation theory, report writing, and forecasting methodologies.
- Led initiatives with KPMG Advisory and Tax partners to develop new service offerings for embedded software studies, asset risk, Federal and State tax savings, fixed asset management, and cost segregation studies.
- Presented and participated on a panel at KPMG’s annual West Coast Energy Tax Share Forum on capital investments and embedded software studies that can yield tax savings for public and private businesses.
PG&E: Supervisor
As a Supervisor in the Finance & Risk organization on the Capital Recovery & Analysis team, I strategized with cross-functional teams to analyze, budget, forecast, and maximize revenues for 15 separately funded special projects in pipeline safety, IT, and electric distribution, with combined annual revenues exceeding $350 million.
- Led a team of Senior Analysts and Experts in collecting historical capital expenditures by line of business (LOB) for PG&E’s rate base and special projects, cleaning the data, forecasting LOB performance, and driving monthly budget-to-actual variance analyses.
- Spearheaded a 12-month process improvement initiative to streamline the company’s P&L forecasting and performance tracking models, resulting in a cross-functional tool with enhanced controls and KPI dashboards.
- Drafted data request responses for regulators, ratepayer advocacy groups, and external auditors (Deloitte), and designed financial templates and controls to improve response turnaround time.
EY: Manager
As a Manager and financial modeler in EY’s Valuation, Modeling & Economics practice, I produced valuation models and fair market value studies in Microsoft Access and Excel to support M&A transactions, purchase price allocations, goodwill impairment analyses, obsolescence studies, and restructurings. I managed scope and resources across more than 100 valuation engagements, collecting financial data (typically 50K to 750K records) from disparate global data sources, manipulating and modeling the data, and leading client presentations on our methodologies and findings.
- Developed and distributed Microsoft Access and Excel valuation and embedded software models for EY’s national Capital Equipment practice, and regularly updated the models as changes were made to US GAAP, IFRS, regulatory standards, BLS indices, and state and local tax regulations.
- Developed procedure documents and training for new and experienced analysts in the U.S.A., Brazil, Canada, India, and Russia, covering the basics of MS Access (relational table design) and MS Excel (modeling best practices).
Scatter Podcast is an analytics and data science podcast that I launched in 2019 to share career tips and insights from data science leaders and practitioners.
As part of UC Irvine’s MS in Business Analytics Capstone Program, I joined a small team of graduate students on a real-world research project for Experian. Our team researched machine learning algorithms to predict a consumer’s likelihood of filing for bankruptcy. My primary contributions were developing a neural network model in R (Keras and TensorFlow), interpreting its “black box” results, and leading presentations with Experian’s Chief Data Scientist and UC Irvine faculty.
TBWA Chiat Day
Supporting TBWA Chiat Day’s Global Data organization over an eight-month period, I researched consumer trends, performed behavioral and statistical analyses, collected large structured and unstructured datasets from social media platforms, and derived data-driven insights for the advertising agency. Two of my key contributions:
- Conceptualized and built a stock price notification system as a proof of concept for an existing agency client, using data from S&P Capital IQ and labeling the events that triggered price fluctuations.
- Analyzed San Francisco geospatial and transportation data in R and developed interactive maps for a new product prototype pitched to a ride-share corporation.
Orange County R Hackathon - 2019 Winner
In 24 hours, my team used the R programming language and Tableau to explore, analyze, and build predictive models from publicly available California water source data, health data, and census records. We developed a multivariate adaptive regression spline (MARS) model that predicted the percentage of poor-health residents from drinking water contaminants with over 80% accuracy. The most important variables were arsenic, nitrate, and uranium levels in tap water. The real surprise was discovering how these contaminants affect low-birthweight and child-poverty percentages at the county level in California.