UTI chat platform by AI Labs SundayPyjamas

Revolutionizing Women's Health Access

A comprehensive AI-powered chat companion that expands access to gynecological and urological care for women.

The UTI chat App tackles the global challenge of limited access to gynecological and urological care for women. It leverages OpenAI's GPT-4 technology to create a user-friendly chat interface, offering:
  • Instant, personalized medical advice tailored to individual needs.
  • Accessibility and anonymity overcoming geographical and social barriers.
  • AI-powered companion for women to understand their bodies and health concerns.

Problem Statement

Millions of women worldwide lack access to essential gynecological care. Stigma and information gaps further exacerbate the issue.

  • 50% of women lack access to basic care.

  • UTIs affect 60% of women in their lifetime, yet many suffer silently.

  • 80% experience menstrual problems, with only 30% seeking help.

Solution

The NextJS ChatGPT App acts as a 24/7 health companion, empowering women to take charge of their well-being. It fosters:

  • Instant access to information and advice.

  • Reduced stigma through a safe, anonymous platform.

  • Knowledge and confidence in managing women's health issues.


Technical Architecture

Front-End (Next.js, React, Joy UI): Ensures a responsive, user-friendly interface across devices.

AI Core (OpenAI's GPT-4): Powers the chat functionality, providing intelligent and personalized responses.

Back-End (TypeScript, Supabase): Handles user authentication and ensures secure data management.

Streaming Implementation (Custom): Enables real-time interaction, mimicking a natural conversation flow.

Modular Design: Facilitates future scalability and integration with new technologies.


 



Technical Highlights

  • Cross-platform Accessibility (Next.js): Delivers a consistent, responsive experience on smartphones, laptops, and other devices.

  • Multi-language Support: Breaks down language barriers, reaching a wider global audience.

  • Bank-Grade Security (Supabase): Prioritizes user privacy with secure authentication and ephemeral conversations.

  • Real-time Interaction (Custom Streaming): Creates a natural and engaging chat experience.

Technical Innovation

Code Execution for Health Data Visualization: Enables users to analyze menstrual data and gain insights into their health trends.

Ephemeral Conversations: Promotes user privacy by automatically deleting chat history, safeguarding sensitive information.

Front-End Development

After setting up the project and component structure, create the state for the chat functionality (a minimal sketch follows this list).

  • Use React's state management (e.g., useState) to store chat messages and user input.

  • Implement a function to send messages to the back-end API.

  • Use a streaming mechanism (e.g., server-sent events) to receive real-time responses from the AI.

  • Render the chat messages using the created components.
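
As a minimal sketch, the chat state and a non-streaming send function might look like the following; the /api/chat endpoint and the ChatMessage shape are illustrative assumptions, and the streaming variant is covered under Streaming Implementation.

// components/Chat.tsx: minimal chat state and a non-streaming send function.
// The /api/chat endpoint and ChatMessage shape are illustrative assumptions.
import { useState } from 'react';

interface ChatMessage {
  role: 'user' | 'assistant';
  content: string;
}

export default function Chat() {
  const [messages, setMessages] = useState<ChatMessage[]>([]);
  const [input, setInput] = useState('');

  async function sendMessage() {
    const userMessage: ChatMessage = { role: 'user', content: input };
    const history = [...messages, userMessage];
    setMessages(history);
    setInput('');

    // POST the conversation to the back-end and append the assistant's reply.
    const res = await fetch('/api/chat', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ messages: history }),
    });
    const { reply } = await res.json();
    setMessages((prev) => [...prev, { role: 'assistant', content: reply }]);
  }

  return (
    <div>
      {messages.map((m, i) => (
        <p key={i}><strong>{m.role}:</strong> {m.content}</p>
      ))}
      <input value={input} onChange={(e) => setInput(e.target.value)} />
      <button onClick={sendMessage}>Send</button>
    </div>
  );
}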

Back-End Development

Supabase Setup:

  • Create a Supabase project and configure authentication and storage settings.

  • Install the Supabase client library:
    npm install @supabase/supabase-js
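
A minimal client initialization sketch, assuming the conventional Supabase environment variable names:

// lib/supabaseClient.ts: shared Supabase client for the front-end.
import { createClient } from '@supabase/supabase-js';

export const supabase = createClient(
  process.env.NEXT_PUBLIC_SUPABASE_URL!,
  process.env.NEXT_PUBLIC_SUPABASE_ANON_KEY!
);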

API Routes:

  • Create a pages/api/chat.ts file to handle chat requests.

  • Use Supabase authentication to verify user credentials.

  • Send the user's message to the OpenAI API.

  • Handle the response from the API and return it to the front-end.

OpenAI Integration:

  • Install the OpenAI client library: npm install openai

  • Use the OpenAI API key to authenticate and make requests.

  • Construct the request payload with the user's message and desired model (e.g., GPT-4).

  • Send the request and handle the response.
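
Putting the API route and OpenAI pieces together, a non-streaming sketch of pages/api/chat.ts might look like this; the bearer-token handoff from the client is an assumption for illustration.

// pages/api/chat.ts: sketch of the chat endpoint (non-streaming variant).
// Assumes the client sends its Supabase access token in the Authorization header.
import type { NextApiRequest, NextApiResponse } from 'next';
import { createClient } from '@supabase/supabase-js';
import OpenAI from 'openai';

const supabase = createClient(
  process.env.NEXT_PUBLIC_SUPABASE_URL!,
  process.env.NEXT_PUBLIC_SUPABASE_ANON_KEY!
);
const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

export default async function handler(req: NextApiRequest, res: NextApiResponse) {
  // Verify the caller's Supabase session before touching the OpenAI API.
  const token = req.headers.authorization?.replace('Bearer ', '') ?? '';
  const { data: { user }, error } = await supabase.auth.getUser(token);
  if (error || !user) {
    return res.status(401).json({ error: 'Unauthorized' });
  }

  // Forward the conversation to OpenAI and return the assistant's reply.
  const completion = await openai.chat.completions.create({
    model: 'gpt-4',
    messages: req.body.messages,
  });
  res.status(200).json({ reply: completion.choices[0].message.content });
}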


Streaming Implementation

Server-Sent Events:

  • Set up server-sent events in the back-end API to send real-time responses to the front-end.

  • Use a helper library (e.g., sse-stream), or write the stream manually by setting the text/event-stream content type and calling res.write.

  • Push new messages to the stream as they are received from the AI.
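
A minimal sketch of the streaming variant inside pages/api/chat.ts, writing the server-sent events manually with res.write and using the OpenAI SDK's stream option (it continues the earlier route sketch):

// Inside pages/api/chat.ts: streaming variant of the OpenAI call.
res.writeHead(200, {
  'Content-Type': 'text/event-stream',
  'Cache-Control': 'no-cache',
  Connection: 'keep-alive',
});

const stream = await openai.chat.completions.create({
  model: 'gpt-4',
  messages: req.body.messages,
  stream: true,
});

// Push each token to the client as it arrives from the AI.
for await (const chunk of stream) {
  const delta = chunk.choices[0]?.delta?.content ?? '';
  if (delta) {
    res.write(`data: ${JSON.stringify(delta)}\n\n`);
  }
}

res.write('data: [DONE]\n\n');
res.end();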

Front-End Listening:

  • In the front-end, create an event listener to listen for messages from the server-sent event stream.

  • Update the chat state with the received messages.
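
On the front-end, EventSource only supports GET requests, so a fetch-based reader is a common alternative for a POST chat endpoint. A minimal sketch, ignoring chunk-boundary edge cases and assuming the setMessages state setter from the chat component above:

// Front-end sketch: read the event stream and append tokens to the chat state.
async function streamMessage(messages: ChatMessage[]) {
  const res = await fetch('/api/chat', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ messages }),
  });

  const reader = res.body!.getReader();
  const decoder = new TextDecoder();

  // Accumulate the assistant's reply token by token.
  while (true) {
    const { value, done } = await reader.read();
    if (done) break;
    for (const line of decoder.decode(value).split('\n')) {
      if (!line.startsWith('data: ') || line.includes('[DONE]')) continue;
      const token = JSON.parse(line.slice('data: '.length));
      // setMessages comes from the chat component's useState.
      setMessages((prev) => {
        const last = prev[prev.length - 1];
        if (last?.role === 'assistant') {
          return [...prev.slice(0, -1), { ...last, content: last.content + token }];
        }
        return [...prev, { role: 'assistant', content: token }];
      });
    }
  }
}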


Sandpack Integration:

  • Install the Sandpack React library: npm install @codesandbox/sandpack-react

  • Create a Sandpack component to handle code execution.

  • Pass the code to the Sandpack component and receive the execution result.
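
A minimal sketch of such a component, using the @codesandbox/sandpack-react package; the component name, file path, and template are illustrative:

// components/CodeRunner.tsx: render AI-generated visualization code in a sandbox.
import { Sandpack } from '@codesandbox/sandpack-react';

export default function CodeRunner({ code }: { code: string }) {
  return (
    <Sandpack
      template="react"
      files={{ '/App.js': code }} // the generated chart or visualization code
    />
  );
}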


Deployment

Vercel Deployment:

  • Create a Vercel project and connect it to your GitHub repository.

  • Set up environment variables for your OpenAI API key and Supabase credentials.

  • Deploy the app to Vercel.
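
The variable names below follow the conventions assumed in the earlier sketches; adjust them to match your project.

# .env.local (mirror these as Vercel environment variables)
OPENAI_API_KEY=sk-...
NEXT_PUBLIC_SUPABASE_URL=https://your-project.supabase.co
NEXT_PUBLIC_SUPABASE_ANON_KEY=your-anon-key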

Implementing Response Restrictions in the NextJS ChatGPT App

To ensure the NextJS ChatGPT App provides safe and responsible advice, it's crucial to restrict certain responses that could be misconstrued as medical advice or recommendations. Here's a step-by-step guide on how to implement these restrictions.

Define Restricted Topics:

  • Create a list of restricted topics or keywords that should be avoided in the AI's responses.

  • Examples: Medications, dosages, diagnoses, surgical procedures, specific medical treatments.
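
An illustrative starting list (the exact keywords should be chosen with clinical review); it feeds directly into the filtering function shown later:

// An illustrative starting list; extend and refine it with clinical input.
const restrictedTopics: string[] = [
  'dosage',
  'prescription',
  'diagnosis',
  'surgery',
  'antibiotic',
];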

Modify the Prompt Engineering:

  • When crafting the prompt sent to the OpenAI API, be explicit about the restrictions.

  • Use negative prompts or instructions to guide the AI away from providing restricted information.


For example:


const prompt = `Act as a helpful and informative AI assistant for women's health. Provide general advice on gynecological and urological topics. However, **avoid providing specific medical advice, such as recommending medications, diagnoses, or treatments**. Focus on general information, symptom management, and self-care strategies.`;

Filter Generated Responses:

  • After receiving the response from the OpenAI API, implement a filtering mechanism to check for restricted keywords or phrases.

  • If a restricted topic is detected, either:

    • Modify the response: Remove the restricted part or rephrase it to provide a more general answer.

    • Redirect the user: Guide the user towards seeking professional medical advice.

// Replace any mention of a restricted topic with a standard redirection message.
function filterResponse(response: string, restrictedTopics: string[]): string {
  const pattern = new RegExp(`(${restrictedTopics.join('|')})`, 'gi');
  return response.replace(
    pattern,
    "I cannot provide specific medical advice on that topic. It's best to consult with a healthcare professional."
  );
}

Provide Clear Disclaimers:

  • Inform users that the app is not a substitute for professional medical advice.

  • Display a disclaimer prominently within the app.

Example:

Disclaimer: This app is intended for informational purposes only and should not be considered a substitute for professional medical advice. Always consult with a healthcare provider for personalized guidance.

Additional Considerations

Error Handling: Implement error handling mechanisms to gracefully handle exceptions and provide informative feedback to the user.

Accessibility: Ensure the app is accessible to users with disabilities by following accessibility guidelines (e.g., WCAG).

Testing: Conduct thorough testing to identify and fix bugs before deployment.

Performance Optimization: Optimize the app's performance by minimizing network requests, using efficient data structures, and leveraging caching techniques.