AI Gave Engineers 20 Hours Back Per Week. Why Aren't We Shipping Faster?
AI has compressed time in my life. This time compression has unlocked a lot, but perhaps not in the ways you'd expect.
Nothing you don't already know, right?
The following are my personal thoughts on tech, AI, startups, and the adoption of AI in healthcare. You can read more about me here.
Healthcare is littered with brittle decision trees. Pre-op instructions, chronic-care check-ins, discharge follow-ups: each new edge case multiplies the branches. Most of our workflows are cron-based, running at fixed times, and are very hard to personalize around patient needs.
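To make that concrete, here is a hypothetical sketch of what this kind of cron-driven follow-up logic tends to look like; the function and field names are illustrative, not taken from any real codebase. Every new edge case becomes another branch, and nothing about the schedule adapts to the individual patient.

```python
def send_preop_reminder(patient: dict) -> str:
    """Runs on a fixed cron schedule; every new edge case adds another branch."""
    if patient.get("procedure") == "knee_replacement":
        if patient.get("is_diabetic"):
            return "Diabetic knee-replacement pre-op instructions"
        return "Standard knee-replacement pre-op instructions"
    if patient.get("procedure") == "hip_replacement":
        if patient.get("on_anticoagulants"):
            return "Anticoagulant hip-replacement pre-op instructions"
        return "Standard hip-replacement pre-op instructions"
    # ...one new branch per edge case, and the cron schedule cannot adapt
    # its timing or content to the individual patient.
    return "Generic pre-op instructions"
```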
We're inspired by the ideas in this Cognitive Architecture paper and an insightful LangChain blog post by Harrison Chase.
At RevelAI Health, we're exploring how to create closed-loop, safe agents in healthcare — systems that can reason and execute on patient needs in a secure and reliable way. The key is understanding how these agentic systems should think, the flow of execution in response to patient intent, and ensuring safety through structured, observable loops.
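As a rough illustration of what a structured, observable loop could look like, here is a minimal sketch; it is not RevelAI's actual implementation, and the action names and planning stub are made up. The agent proposes a step, only allow-listed actions run, every step is logged, and anything uncertain escalates to a human.

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("agent_loop")

# Only actions on this allow-list are ever executed.
ALLOWED_ACTIONS = {"send_checkin_message", "schedule_followup", "escalate_to_care_team"}

def plan_next_action(patient_intent: str) -> str:
    """Placeholder for an LLM call that maps patient intent to a next action."""
    return "escalate_to_care_team" if "pain" in patient_intent else "send_checkin_message"

def run_agent_loop(patient_intent: str, max_steps: int = 3) -> None:
    for step in range(max_steps):
        action = plan_next_action(patient_intent)
        # Every decision is logged so the loop stays observable.
        log.info("step=%d intent=%r proposed_action=%s", step, patient_intent, action)
        if action not in ALLOWED_ACTIONS:
            log.warning("unrecognized action %s, escalating to care team", action)
            action = "escalate_to_care_team"
        if action == "escalate_to_care_team":
            log.info("handing off to a human; closing the loop")
            return
        # Execute the allow-listed action here, then re-observe and decide again.
    log.info("max steps reached; closing the loop")

run_agent_loop("I have severe pain after my knee surgery")
```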
When you deploy a healthcare product, HIPAA compliance is crucial. No matter how innovative your solution is, you won't get deployed without convincing the CIO or the security team. I view security and HIPAA posture as essential features of any healthcare product.
I recently listened to Pieter Levels on the Lex Fridman Podcast, and it was eye-opening. Pieter has built numerous successful micro-SaaS businesses by running his applications on a single server, avoiding cloud infrastructure complexity, and focusing on what truly matters: product-market fit.
My desk setup as of writing this post. I've been working from home for the past year and have been slowly evolving my setup to be more ergonomic and efficient.
I also want to use this blog to track how my setup changes over time and share it with fellow devs and teammates.
In the ever-evolving landscape of LLMs, I've observed two distinct camps: the doomsayers who predict a dystopian future and the overly optimistic who claim AI has completely transformed their lives overnight. As for me? I find myself somewhere in the middle – cautiously optimistic about the technology's potential while actively seeking ways to harness it for practical, everyday use.
Understanding observability in AI applications, particularly in Large Language Models (LLMs), is crucial. It's all about tracking how your model performs over time, which is especially challenging with text generation outputs. Unlike categorical outputs, text generation can vary widely, making it essential to monitor the behavior and performance of your model closely.
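Here is a minimal sketch of that kind of tracking, assuming a generic `call_model` stand-in for whatever LLM client you actually use: each call is wrapped so the prompt, response, latency, and model version are logged as structured records you can review over time.

```python
import json
import logging
import time

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("llm_trace")

def call_model(prompt: str) -> str:
    """Stand-in for your actual LLM client call."""
    return "stubbed model output"

def traced_completion(prompt: str, model: str = "my-model-v1") -> str:
    start = time.time()
    response = call_model(prompt)
    # Emit one structured record per call so behavior can be tracked over time.
    log.info(json.dumps({
        "model": model,
        "prompt": prompt,
        "response": response,
        "latency_s": round(time.time() - start, 3),
    }))
    return response

traced_completion("Summarize the patient's discharge instructions.")
```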
Recently, there has been a surge of enthusiasm around large language models (LLMs) and generative AI, and justifiably so: LLMs have the power to revolutionize entire industries. Yet that enthusiasm inevitably breeds hype. It is hard to resist putting “AI” front and center in a product’s pitch, given the immediate market interest it generates.
Guest post on the Twilio blog on how we at Vincere used Twilio to reinforce healthy habits that improve the lives of underserved populations.