
2024

Cognitive Architecture Patterns in Health Care for LLMs

We're inspired by the ideas in this Cognitive Architecture paper and an insightful LangChain blog post by Harrison Chase.

At RevelAI Health, we're exploring how to build closed-loop, safe agents in healthcare: systems that can reason about and act on patient needs in a secure and reliable way. The key is understanding how these agentic systems should think, how execution should flow in response to patient intent, and how safety can be ensured through structured, observable loops.
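As a rough illustration of what such a loop might look like, here is a minimal Python sketch of an agent that classifies patient intent, applies an allow-list as a safety rail, and records every step so the loop stays observable. The intent labels, class names, and the stubbed classifier are hypothetical assumptions for the example, not RevelAI's actual implementation.

```python
from dataclasses import dataclass, field
from typing import Callable, List

# Hypothetical intent labels an intake agent might route on.
ALLOWED_INTENTS = {"schedule_followup", "medication_question", "escalate_to_clinician"}

@dataclass
class AgentStep:
    """One observable step in the loop: what the agent saw, decided, and did."""
    patient_message: str
    classified_intent: str
    action_taken: str

@dataclass
class ClosedLoopAgent:
    classify_intent: Callable[[str], str]   # e.g. an LLM call behind a guardrail
    trace: List[AgentStep] = field(default_factory=list)

    def handle(self, patient_message: str) -> str:
        intent = self.classify_intent(patient_message)
        # Safety rail: anything outside the allow-list is escalated to a human.
        if intent not in ALLOWED_INTENTS:
            intent = "escalate_to_clinician"
        action = f"routed:{intent}"
        # Every step is appended to the trace, which is what makes the loop auditable.
        self.trace.append(AgentStep(patient_message, intent, action))
        return action

# Stubbed "LLM" classifier so the sketch runs without external calls.
agent = ClosedLoopAgent(classify_intent=lambda msg: "schedule_followup")
print(agent.handle("Can I move my post-op visit to next week?"))
```

The design choice to log every step and to fail closed (escalating anything unrecognized to a clinician) is what makes a loop like this "closed" and safe rather than open-ended.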

My desk setup in 2024

My desk setup as of writing this post. I've been working from home for the past year and have been slowly evolving my setup to be more ergonomic and efficient.

Also, I want to use this blog to track changes to my setup over time and share them with fellow devs and teammates.


How LLMs Revolutionized My Productivity

In the ever-evolving landscape of LLMs, I've observed two distinct camps: the doomsayers who predict a dystopian future and the overly optimistic who claim AI has completely transformed their lives overnight. As for me? I find myself somewhere in the middle – cautiously optimistic about the technology's potential while actively seeking ways to harness it for practical, everyday use.

From Concept to Production with Observability in LLM Applications

Understanding observability in AI applications, particularly those built on Large Language Models (LLMs), is crucial. It's all about tracking how your model performs over time, which is especially challenging with text-generation outputs. Unlike categorical outputs, generated text can vary widely, making it essential to monitor the model's behavior and performance closely.
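As a minimal sketch of what that tracking could look like in practice, the snippet below wraps each generation call so the prompt, output, and latency land in an append-only log that can be analyzed over time. The field names, the log file path, and the stubbed `generate` callable are illustrative assumptions, not a specific tracing product's schema.

```python
import json
import time
from datetime import datetime, timezone

def log_llm_call(logfile: str, prompt: str, generate):
    """Wrap a text-generation call and record what we need to monitor it over time.

    `generate` is any callable prompt -> text; the record schema here is
    illustrative, not tied to a particular observability tool.
    """
    start = time.perf_counter()
    output = generate(prompt)
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "prompt": prompt,
        "output": output,
        "latency_s": round(time.perf_counter() - start, 3),
        "output_chars": len(output),  # crude proxy; swap in token counts if available
    }
    # Append one JSON line per call so behavior can be compared across time.
    with open(logfile, "a") as f:
        f.write(json.dumps(record) + "\n")
    return output

# Stubbed model so the sketch runs end to end without an API key.
log_llm_call("llm_trace.jsonl", "Summarize the discharge instructions.",
             generate=lambda p: "Rest, hydrate, and follow up in two weeks.")
```

Because generated text has no fixed label space, trend-level signals like latency, output length, and drift in the logged outputs are what you end up watching instead of a single accuracy number.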