Tim Gilbert

Tim has two and a half decades of experience in tech and data science and a passion for solving challenging problems. He loves using AI and creating custom algorithms to automate time-intensive tasks and save people hours of work every day.

Preferred tools:

  • Python
  • Pandas
  • Keras
  • MySQL
  • AWS
  • Ubuntu
  • Bitbucket

Favorite kinds of projects:

  • Anything that requires inventing new algorithms. Whether it's creating custom fuzzy-search metrics, finding new ways to algorithmically measure e-commerce product title quality, or assessing market growth potential, I enjoy exploring new ideas to model or simulate human judgement using quantitative methods.
  • Extracting structured data and ML features from unstructured or non-standardized datasets (a small sketch of the idea follows below).
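
By way of illustration, here is a minimal Python sketch of that kind of extraction. It is not taken from Tim's actual code: the attribute fields (a size in ounces and a pack count) and the regular expressions are purely hypothetical stand-ins for what a real extractor would handle.

    import re

    # Illustrative patterns only: a real extractor would cover many more units,
    # formats, and attribute types than these two.
    SIZE_OZ = re.compile(r"(\d+(?:\.\d+)?)\s*(?:oz|ounce)s?\b", re.IGNORECASE)
    PACK = re.compile(r"(?:pack of\s*(\d+)|(\d+)[\s-]*pack)", re.IGNORECASE)

    def extract_attributes(title: str) -> dict:
        """Return whichever structured fields can be recovered from a product title."""
        attrs = {}
        size_match = SIZE_OZ.search(title)
        if size_match:
            attrs["size_oz"] = float(size_match.group(1))
        pack_match = PACK.search(title)
        if pack_match:
            attrs["pack_count"] = int(pack_match.group(1) or pack_match.group(2))
        return attrs

    print(extract_attributes("Insulated Water Bottle, 32 oz, 2-Pack"))
    # -> {'size_oz': 32.0, 'pack_count': 2}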

Experience

  • Programming (Python/Cython, VB/VBA, JavaScript)
  • AI/ML (Classification, Regression, NLP, DNN, CNN)
  • Cloud (AWS and Azure)
  • BI (Power BI and Data Studio)
  • DBA (MySQL, Postgres, MongoDB)
  • IT (Desktop support, Windows and Linux, Networking/VPN)
  • Websites (Custom HTML/CSS/JavaScript/Apache2/Joomla)
  • Content (Video, Audio, and Photo Editing)

Fun pre-IDSTS projects:

  • Turning text that describes product colors, including adjective modifiers, into RGB values
  • Assessing e-commerce product title quality on 30 different metrics
  • Automatically classifying products into existing product categories, and measuring the quality of existing taxonomies
  • Creating new custom product catalog taxonomies using word-vector space and structured attributes
  • Extracting and standardizing product attributes from product titles and descriptions
  • Analyzing survey responses to find the most frequently asked questions
  • Creating time-saving add-ins for MS Excel to clean, format, and analyze tables
  • Clustering search queries into topics and broader themes
  • Creating a parametric ship design program to optimize design, size, and power plant based on trade route
  • Identifying phrases whose meaning differs from that of the individual words they contain
  • Measuring the price impact of structured attributes and phrases to find what is most valuable to highlight
  • Creating a custom fuzzy-text matching program to efficiently find the most similar words, phrases, and sentences in large corpora when standard character n-gram, phonetic, and Levenshtein-based approaches are inadequate; a simplified toy version of the matching problem follows below
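
The sketch below only illustrates the shape of that last problem: ranking the entries of a small corpus by similarity to a query. The blended word-overlap and character-ratio score is an off-the-shelf stand-in, not Tim's custom metric (which isn't described here), and the weights are arbitrary.

    from difflib import SequenceMatcher

    def similarity(query: str, candidate: str) -> float:
        """Blend word-level Jaccard overlap with a character-level match ratio."""
        q_words = set(query.lower().split())
        c_words = set(candidate.lower().split())
        union = q_words | c_words
        jaccard = len(q_words & c_words) / len(union) if union else 0.0
        char_ratio = SequenceMatcher(None, query.lower(), candidate.lower()).ratio()
        return 0.6 * jaccard + 0.4 * char_ratio  # arbitrary illustrative weights

    def most_similar(query: str, corpus: list, top_n: int = 3) -> list:
        """Rank corpus entries by blended similarity and keep the top_n."""
        scored = [(text, similarity(query, text)) for text in corpus]
        return sorted(scored, key=lambda pair: pair[1], reverse=True)[:top_n]

    corpus = [
        "stainless steel water bottle 32 oz",
        "insulated steel bottle for water, 32 ounce",
        "ceramic coffee mug 12 oz",
    ]
    print(most_similar("32oz stainless water bottle", corpus))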