
Posted

I’m not sure whether this belongs here or in Tech Talk; I’m sure the mods will move it if needed.

 

Providers often say one of their biggest frustrations is the sheer amount of time spent replying to messages, especially from time wasters. At the same time, clients regularly complain about slow or nonexistent responses from providers. That gap makes me wonder: Does it make sense to let an AI agent step in?

An AI could instantly handle basic questions, screen out nonsense, and book appointments without emotion, burnout, or delays. No mood swings. No forgotten messages. No “sorry, just seeing this now.”
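
For what it’s worth, the screening part is technically trivial. Here’s a minimal sketch of what it could look like, assuming an OpenAI-style chat API; the model name, labels, and prompt are placeholders, not anything a real provider uses:

```python
# Hypothetical sketch: classify an incoming inquiry before a human sees it.
# Assumes the official openai Python client (pip install openai) and an
# OPENAI_API_KEY in the environment; the model name is illustrative.
from openai import OpenAI

client = OpenAI()

SCREENING_PROMPT = (
    "You triage inquiries for a busy independent provider. "
    "Classify the message as exactly one of: VIABLE, TIME_WASTER, UNCLEAR. "
    "Reply with the label only."
)

def screen(message: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": SCREENING_PROMPT},
            {"role": "user", "content": message},
        ],
    )
    return resp.choices[0].message.content.strip()

print(screen("hey u free rn??"))  # likely TIME_WASTER or UNCLEAR
```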

Of course, that raises obvious questions about authenticity and trust. But given how common these complaints are on both sides, is this an inevitable evolution or a step too far?

Posted

I don't like the idea of having to deal with an AI pimp. 😀

When agencies were more abundant, only one I dealt with seemed straightforward. With the others, I often felt I was getting the runaround when I asked for, or about, specific providers. With all of them, I felt a level of detachment from the provider because I couldn't ask him certain things directly. Every provider I met through an agency ultimately gave me their direct contact because they hated dealing with a go-between.

Yes, I see the advantages of AI for a swamped provider, but there's also the risk of the AI giving out incorrect information, plus the disconnect it creates.

 

Posted (edited)

Maybe, but I don't think it'd go as far as suggested. I could see it screening initial inquiries if a provider so chooses. But if I were a provider, I'd personally still want the AI to hand me the viable client candidates and give the final OK myself on which ones I take and when. I'd just want it to weed out time wasters and incompatible clients; I wouldn't want it accepting meets and booking my calendar for me.
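
To make the split concrete, here's a toy sketch of what I mean. Everything in it is made up; the point is that the AI only fills a review queue, and a human call is the only thing that ever approves anything:

```python
# Toy sketch of AI-proposes / human-disposes triage. All names are
# hypothetical; approve() is only ever called by a person.
from dataclasses import dataclass

@dataclass
class Candidate:
    sender: str
    message: str
    model_verdict: str       # filled in by the screening model
    approved: bool = False   # only the provider flips this

def triage(inbox: list[Candidate]) -> list[Candidate]:
    """Surface viable candidates; nothing is booked here."""
    return [c for c in inbox if c.model_verdict == "viable"]

def approve(candidate: Candidate) -> None:
    """Human-only step; any calendar/booking logic hangs off this."""
    candidate.approved = True

inbox = [
    Candidate("A", "Hi, saw your ad. Free Thursday evening?", "viable"),
    Candidate("B", "u do discounts?", "time-waster"),
]
for c in triage(inbox):
    print(f"{c.sender}: {c.message}")  # provider reviews, then calls approve()
```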

As someone who's tried implementing AI in my own job, I've found a lot of its limitations and where it falls short of the hype and marketing. There are things it's just not ready for yet, and it still needs lots of hand-holding and double-checking of its work. LLMs (large language models), despite all the marketing, are not sentient, aren't aware of what they say to you, and will spit out incorrect information confidently, and AI agents have done things like accidentally wipe a user's hard drive and other dumb stuff.

As for reply delays: I see it like when people text you, don't get a response, and then call you thinking that will make you pick up. If I'm not available to reply to a text, I'm also not available to answer a call haha. And in the scenario where a provider is busy working and can't give the final OK to a meet the AI set up and passed along, there's still going to be a delay until the provider gets back to their phone or computer to give that OK. Both clients and providers really need to accept that people aren't staring at their inbox or obligated to be reachable 24/7, and get over it, in my opinion 😆.

Another big issue this use of AI raises is discretion and privacy. Unless the provider is running their own self-hosted implementation of an open-source model, things like ChatGPT log everything you type and send it home to OpenAI for further model training (some AI platforms claim to let you opt out, but with zero regulation in this space, no one actually knows whether they honor that).

Things you put into ChatGPT are kept on record and can be used against you in court, for example. In places where what we all engage in here isn't exactly legal, it would be very unwise to use AI this way unless you set up a private one, which the average person wouldn't know how to do; they'd just default to ChatGPT.
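
For anyone curious what "setting up a private one" looks like, here's a rough sketch using Ollama, one self-hosting option that serves open-weight models on localhost, so nothing you type leaves the machine. The model name and prompt are illustrative only:

```python
# Hypothetical sketch of the self-hosted route: Ollama (https://ollama.com)
# running locally, queried over its HTTP API. No text leaves the machine.
import requests

def ask_local(prompt: str, model: str = "llama3") -> str:
    resp = requests.post(
        "http://localhost:11434/api/generate",  # Ollama's default port
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["response"]

print(ask_local("Draft a polite one-line reply declining a same-day booking."))
```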

We already can't even get everyone who does this to use encrypted messaging apps like Signal as is 🤣

Edited by DMonDude
Posted
On 12/13/2025 at 9:07 PM, DMonDude said:

Maybe, but I don't think it'd go as far as suggested. I could see it screening initial inquiries if a provider so chooses. […]

 

I agree that AI still has a long way to go and is far from perfect. That said, in my very limited experience, it performs surprisingly well at repetitive tasks like answering messages, especially when it's properly "trained" on your own material via retrieval-augmented generation (RAG). I also agree that letting it handle actual bookings may be a step too far for now.
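
To illustrate what I mean by "trained": in a RAG setup the model isn't retrained at all; you just retrieve the closest entry from your own FAQ and feed it into the prompt. Here's a dependency-free toy version of the retrieval step (real systems use embedding similarity, and the FAQ entries are invented):

```python
# Toy RAG retrieval step: find the closest canned FAQ answer by word overlap
# and build a grounded prompt. Real systems use embedding similarity instead.
import re

FAQ = {  # invented example entries
    "What are your rates?": "Rates are in my ad; I don't negotiate.",
    "Are you available tonight?": "I book at least 24 hours ahead.",
    "Do you travel?": "Outcalls within the city only, deposit required.",
}

def words(s: str) -> set[str]:
    return set(re.findall(r"[a-z']+", s.lower()))

def retrieve(question: str) -> str:
    best = max(FAQ, key=lambda k: len(words(k) & words(question)))
    return FAQ[best]

def build_prompt(question: str) -> str:
    return (f"Answer using only this policy: {retrieve(question)}\n\n"
            f"Client asked: {question}")

print(build_prompt("hey what are your rates"))
```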

As for concerns about the content being used against you in court, I wouldn’t lose much sleep over that for two simple reasons. First, running an AI agent anonymously isn’t particularly difficult. Second, those messages already live on your phone and with your mobile provider, so the privacy ship sailed a long time ago.
