I'm a seasoned Generative AI and Machine Learning Engineer with deep expertise in developing and deploying state-of-the-art AI solutions. With over seven years in the field, my journey spans leading roles in AI research, generative model development, and ML operations, with a focus on high-impact applications in the telecommunications, retail, and finance sectors.
Currently, as a Lead Analyst in Generative AI Engineering at NTT DATA Europe & Latam, I specialize in multi-agent architectures and advanced LLM integration, leveraging tools like OpenAI, Claude, LlamaIndex, and LangChain. My role includes designing and fine-tuning agent-based AI solutions to enhance relational database queries, automate test case generation, and optimize user insights through complex processing of high-volume datasets. In previous roles at organizations such as Microsoft LatAm, Porter Novelli, and Softtek, I developed solutions ranging from predictive modeling to large-scale sentiment analysis, making significant contributions to data-driven decision-making processes.
I am proficient with a wide array of tools and frameworks such as Python, TensorFlow, PyTorch, AWS, Azure, GCP, and numerous MLOps tools, ensuring reliable, scalable, and high-performing AI and ML systems. My commitment to staying ahead of the latest developments in AI is fueled by my hands-on experience in generative AI, multi-modal prompting, and continuous model training and deployment across various cloud platforms.
Driven by a passion for innovation, I enjoy collaborating with cross-functional teams to craft solutions that redefine possibilities in AI. My goal is to continue pushing the boundaries of AI, transforming data into actionable insights that can drive strategic growth and efficiency for organizations worldwide.
Integrated multi-agent architectures using LangChain to implement LLM-based solutions.
Designed multi-agent architectures, prompts, and agent calibration using OpenAI, Claude, LlamaIndex, and open-source models, coordinating with teams and clients to monitor and validate end-to-end results.
Developed a natural language query system for relational databases to extract tailored insights.
Solved problems using graph and vector databases together with generative AI models and prompt engineering.
Developed automated pipelines for test case generation using LLMs, vector databases, and graph databases through multi-agent architectures with LangChain and LlamaIndex.
Conducted database analysis using a multi-agent architecture with LangChain and LangGraph, enabling query generation and execution on SQL databases through natural language processing with GPT-4 (a minimal sketch of this pattern follows this list).
Executed analysis on datasets of up to 15 GB for AT&T, performing EDA and generating variables to identify correlations between signal strength, node and antenna locations, and user proximity, optimizing insights on signal quality relative to user location.
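As a rough illustration of that natural-language-to-SQL pattern (not the production NTT DATA implementation), the sketch below assumes a recent LangChain release, an OpenAI API key in the environment, and a placeholder SQLite file named example.db; import paths may differ between LangChain versions.

```python
# Minimal natural-language-to-SQL agent sketch with LangChain.
# Assumptions: recent langchain-community / langchain-openai packages,
# OPENAI_API_KEY set in the environment, and a local example.db (placeholder).
from langchain_community.utilities import SQLDatabase
from langchain_community.agent_toolkits import create_sql_agent
from langchain_openai import ChatOpenAI

db = SQLDatabase.from_uri("sqlite:///example.db")   # placeholder database
llm = ChatOpenAI(model="gpt-4", temperature=0)

# The agent inspects the schema, writes SQL, executes it, and summarizes the result.
agent = create_sql_agent(llm=llm, db=db, agent_type="openai-tools", verbose=True)

answer = agent.invoke({"input": "Which five customers generated the most revenue last quarter?"})
print(answer["output"])
```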
Generative AI Engineer with one year of hands-on experience applying and developing solutions in the AI domain, with a special focus on Natural Language Processing, Large Language Models (LLMs), Generative Adversarial Networks (GANs), and Hugging Face Transformer architectures, plus familiarity with Retrieval-Augmented Generation (RAG) and multi-task language models.
Experienced with tools such as TensorFlow, PyTorch, GPT architectures, the OpenAI API, LangChain, Pinecone, AWS SageMaker, DALL-E, Stable Diffusion, and others. Continuously staying up to date on the latest trends and technologies in the field.
• As Senior Machine Learning Engineer at CochairAI, I led the development of an advanced recommendation system for chambers of commerce, integrating LLMs, RAG, NLP, and TensorFlow Recommenders. I devised solutions using Pinecone, scikit-learn, Surprise, and BERT, achieving personalized recommendations with accuracy between 0.86 and 0.91, and implemented OpenAI's Ada model to further refine accuracy. The process included rigorous testing, with FastAPI and Streamlit providing a user-friendly interface. The system, deployed via AWS SageMaker, proved robust and scalable, with high-accuracy results documented in the CochairAI repository (an illustrative retrieval sketch appears after this list).
• Develop and integrate NLP and LLM techniques into recommender systems to enhance their ability to understand user intent and extract meaningful information from user interactions.
• Implement NLP and LLM-powered recommender systems to provide users with personalized and relevant recommendations, leading to increased user engagement, satisfaction, and conversion rates.
• Designed and implemented transformer-based recommender systems using TensorFlow and PyTorch, achieving significant improvements in recommendation accuracy and relevance.
• Developed and integrated RAG models into recommender systems to effectively retrieve relevant information from external sources, enhancing the system’s knowledge base and personalization capabilities.
• Continuously evaluated and refined transformer-based and RAG-based recommender systems using TensorFlow and PyTorch evaluation tools and metrics, ensuring optimal performance and alignment with business objectives.
• Proficient in utilizing various LLM-centric APIs, including Hugging Face, LangChain, and others, to effectively harness the power of LLMs for a range of tasks.
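As a rough illustration of the embedding-retrieval side of such a recommender (not the CochairAI code), the sketch below assumes the current Pinecone client, a pre-created index named "items" (hypothetical), and sentence-transformers standing in for the BERT embeddings.

```python
# Illustrative embedding-retrieval recommender sketch (assumptions noted above).
from pinecone import Pinecone
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("all-MiniLM-L6-v2")   # stand-in for BERT embeddings
pc = Pinecone(api_key="YOUR_API_KEY")               # placeholder credential
index = pc.Index("items")                           # hypothetical, pre-created index

# Index catalog items once (toy data, not real chamber-of-commerce content).
items = {
    "evt-1": "Export workshop for food producers",
    "evt-2": "Networking breakfast for fintech startups",
}
index.upsert(vectors=[(item_id, encoder.encode(text).tolist(), {"text": text})
                      for item_id, text in items.items()])

# Recommend: embed a summary of the user's interests and retrieve nearest items.
user_profile = "member interested in cross-border trade and agriculture"
result = index.query(vector=encoder.encode(user_profile).tolist(),
                     top_k=3, include_metadata=True)
for match in result.matches:
    print(match.id, round(match.score, 3), match.metadata["text"])
```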
Key activities:
* Perform the end-to-end data science process to acquire and prepare data, then train, test, tune, and deploy ML models using AWS and open-source tools.
* Develop and deploy data science artifacts into production workflows and business applications across the NLP, retail, and finance sectors.
* Build, deploy, configure, and maintain solutions based on machine learning systems.
* Prototype and demonstrate solutions for clients, in both local and customer environments.
* Implement end-to-end data science solutions, monitor models, perform testing, and generate endpoints to deploy automated solutions (see the serving sketch after this list).
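A minimal sketch of the endpoint-generation step, assuming a scikit-learn pipeline already serialized to a placeholder model.joblib file and served with FastAPI; the actual deployments relied on AWS tooling rather than this toy service.

```python
# Toy prediction endpoint: load a serialized scikit-learn pipeline and serve it.
# Assumptions: model.joblib exists (placeholder) and expects an ordered feature vector.
import joblib
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()
model = joblib.load("model.joblib")  # hypothetical trained pipeline

class Features(BaseModel):
    values: list[float]              # ordered feature vector expected by the model

@app.post("/predict")
def predict(features: Features):
    prediction = model.predict([features.values])
    return {"prediction": prediction.tolist()}
```

Run locally with uvicorn app:app --reload (assuming the file is named app.py), then POST feature vectors to /predict.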
Software and Tools:
Programming Languages: Python, R, SQL, JavaScript.
Data Visualization: Power BI, Tableau, Google Data Studio, QuickSight, Metabase, Looker.
Machine and Deep Learning: scikit-learn, XGBoost, TensorFlow, Anaconda, H2O, NumPy, Pandas, data analysis, EDA, feature engineering, regression and classification models, time series, PyCaret, Gensim, spaCy, NLTK.
DevOps and MLOps tools: AWS (S3, Lambda functions, EC2, SageMaker, CloudFormation), Docker, Kubernetes, Airflow.
Other tools: GitHub, Jira, Bitbucket, Bash, RCA (Root Cause Analysis), Agile methodologies (Scrum).
• Tasks related to dashboard development, data analysis, business intelligence, predictive analysis, time series analysis, market trend tracking for influencers, risk analysis, and pricing forecasts.
• I developed and delivered the first BI platform to integrate and harmonize influencer information for the Sony Electronics Latam Hub.
• I built and deployed a sentiment classifier using SVM and KNN to identify the tonality of news about Microsoft Latam and Sony Electronics Latam; more than 60,000 combinations were used across 10,000 brand-related keywords. The final models were deployed on Microsoft Azure and Google Cloud Platform, respectively (a simplified sketch follows).
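A simplified sketch of the SVM half of such a classifier, using TF-IDF features and toy labels; the real pipeline, keyword set, and training data were much larger and brand-specific.

```python
# Toy TF-IDF + linear SVM sentiment classifier (illustrative only).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Tiny made-up training set standing in for labeled brand news.
train_texts = ["Great quarter for the brand", "Product recall hurts reputation"]
train_labels = ["positive", "negative"]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
clf.fit(train_texts, train_labels)

print(clf.predict(["New launch receives strong press coverage"]))
```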
• Create and configure relational databases.
• Statistical analysis of databases (KPI automation).
• Database administration and cleansing (Access, CSV, SQL).
• Detection of behavioral patterns in financial scenarios.
• Design of dynamic, programmatically built dashboards in Excel, Python, and R.
• Preparation of executive presentations and statistical operating indicators.
• Improvement of administrative processes and optimization of resources.
• Use of macros, Pandas, NumPy, SciPy, and Matplotlib for database analysis in Python, Excel, and R.
• Presentation of dynamic charts to describe and track KPIs.
• Implementation of mathematical optimization models (linear and nonlinear programming, inventory models, network flow analysis, and transportation models); a toy example follows this list.
• Statistical management of databases (RStudio, JetBrains IDEs, Spyder, Anaconda, Excel, Orange Canvas).
• Advanced Excel (macros, concatenation, creation and editing of advanced tables, statistical and probability formulas).
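A toy transportation-style linear program illustrating the optimization modeling approach, solved with scipy.optimize.linprog; the costs, supplies, and demands below are made up for illustration.

```python
# Toy transportation problem: minimize shipping cost for two warehouses and two stores.
import numpy as np
from scipy.optimize import linprog

# Decision variables: shipments w1->s1, w1->s2, w2->s1, w2->s2 (cost per unit).
cost = np.array([4, 6, 5, 3])
A_ub = np.array([[1, 1, 0, 0],    # warehouse 1 supply limit
                 [0, 0, 1, 1]])   # warehouse 2 supply limit
b_ub = np.array([80, 70])
A_eq = np.array([[1, 0, 1, 0],    # store 1 demand
                 [0, 1, 0, 1]])   # store 2 demand
b_eq = np.array([60, 50])

res = linprog(cost, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
print(res.x, res.fun)  # optimal shipment plan and total cost
```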
* Apply SQL, R, SAS, and Python to manage, manipulate, clean, and analyze data.
* Design and implement different approaches to analyze and solve predictive modeling problems for large-scale and real-time data flows.
* Develop statistical models and computational solutions using large-scale data manipulation, statistical analyses, data mining, and data visualization.
* Hands-on experience applying big data, statistical analysis, predictive model design, and implementation of machine learning and artificial intelligence models.
* (Using R, Python, JavaScript, SAS, Power BI, PL/MySQL, Shiny, HTML, CSS, D3.js, Google Cloud Platform, VBA for Microsoft Excel, Databricks, Azure, Google Data Studio, Redash, Metabase, Tableau, Adjust)
As a Supplier Technical Assistance Engineer, I am responsible for ensuring that the parts and systems delivered by our external suppliers are of the highest possible quality and delivered on time, in the required quantities. I provide support and contribute across the global organization.
My essential job functions are:
* Apply quality tools and techniques at suppliers' manufacturing facilities to ensure they consistently produce high-quality products.
* Work closely with Purchasing and Product Development to select the most capable and efficient suppliers.
* Resolve manufacturing and quality issues at suppliers' facilities to guarantee they meet Ford requirements.
* Ensure suppliers' manufacturing facilities can deliver a sufficient quantity of parts to meet customer demand.
As Quality Resident Engineer (Ford Motor Company), I am responsible for ensuring the quality of parts produced on the plant floor and guaranteeing they meet all customer expectations. I must also act quickly and effectively when non-conformances are found, staying current with the most recent interpretations of the quality system requirements so that documented practices reflect the true intent of the quality management system standards.
My essential job functions are:
* Evaluate and improve processes in manufacturing systems. Maintain reliable and safe manufacturing systems while improving production rates, efficiency, cost, and changeover.
* Improve process capability and production volume while maintaining and improving quality standards.
* Develop and submit PPAP documents for new product launches to meet customer requirements.
* Determine when process stability and capability studies should be performed on existing processes, review and analyze results and recommend changes to processes based on findings and perform follow up to verify effectiveness.
* Participate in the Advanced Product Quality Planning activities to determine appropriate use of existing and new measurement systems during new tool or process design and startup.
I am responsible for completing the bill-of-materials for customer orders that are not supported by the standard product offering and extensively reviewing all components on an order to ensure form, fit, and function requirements are met.
Essential Job Functions
* Through guided interaction, work with the appropriate Sales Engineering, Product Engineering, or Local Business Partners (LBPs) to select alternative components when form, fit, and function requirements are not met.
* Complete an extensive component level order review to ensure correct pressures and temperatures (ANSI seat leak rates, valve stem loads and shaft torque limits), correct actuator sizing, and correct functioning of accessories.
* Confirm material combinations from product knowledge/experience for special order constructions.