
Notes from the 2024 Alliance Annual Conference

During the first week of February, the Alliance for Continuing Education in the Health Professions (ACEhp) met in New Orleans, LA, for its 2024 Annual Conference. There were three days of education and networking, with a little music and a few beignets to celebrate being in NOLA during Mardi Gras.

Key Takeaways:
  1. The process of standardizing our shared vocabulary is just that: a process. Definitions may exist, but adoption continues to be slow. The question remains whether to move forward with defining new terms, to focus on promoting what’s been done so far, or to give up altogether. A big stumbling block may lie in the technology we use: if we have firm definitions for “learners,” “completers,” and “attendees,” but our learning management systems can’t differentiate among them, it’s hard to report on those specifics.
  2. Self-efficacy belief (SEB) has a significant impact on whether learners change their behavior. In order to act, you have to believe that you can, so education should be designed to support learners’ SEB.
  3. Education is not in a silo. Changes in technology, funding, patients, providers, and the world all impact education, and education can also address those changes. When designing education, we need to think about the various systems in which we work.
  4. Long conferences are a great opportunity to network and learn. They’re also, well, long. So maybe have puppies in the exhibit hall? Sadly, there were no puppies at ACEhp, but we did hear (and see pictures) from another conference that did, with the result being a space that was both fun and educational.


Exploring AI for Educational Content Creation and Outcomes Assessment With Gregory Salinas, PhD, FACEHP

After attending the 2024 Alliance for Continuing Education in the Health Professions Annual Conference, i3 Health sat down with Dr. Gregory Salinas, President of CE Outcomes, who led a workshop at the conference that explored the emerging uses of artificial intelligence (AI) for continuing medical education/nursing continuing professional development (CME/NCPD) activity development. In this interview, Dr. Salinas further explores the evolving pros and cons of AI and how educational organizations can maximize its potential for content creation and outcomes assessment.

i3 Health: Welcome to this i3 Health interview. Today, I have the pleasure of being joined by Gregory Salinas, who is the President of CE Outcomes. At the Alliance for Continuing Education in the Health Professions Annual Conference earlier this month, Dr. Salinas led a workshop titled “Mastering AI-Driven Content Creation and Outcome Assessment.” Today, he’s here to share some additional insights from the session.

Dr. Salinas, thank you so much for coming on to talk with us today. Would you like to start off by introducing yourself?

Gregory Salinas, PhD, FACEHP: Sure, I’m the President of CE Outcomes. We are an educational research company. We work with the supporters and providers of education to help them understand things like their target audience, where they like to learn, how they like to learn, and what they need to know, as well as the impact of education. For example, how are clinicians putting education into practice? What types of education really make sense and are meaningful to clinicians when they’re managing patients? And what are some ongoing barriers, gaps, and needs in the area as we think about future educational efforts?

i3 Health: Awesome, glad to have you on. Today, we’re discussing the workshop that you led at Alliance recently about mastering AI-driven content creation and outcome assessment. To start off, what are some of the ways in which AI is currently emerging as a tool for activity content creation, assessment, and outcomes in the CME/NCPD world?

Dr. Salinas: Right now, I think it is a work in progress. There are a lot of things you can do with AI, and these are the things that we practiced at that workshop, such as developing questions that assess an educational activity. AI is fairly good at working from the text in a slide deck and putting together questions if you ask it in the right way. That’s one of the big takeaways I’ve seen: if you’re a little rushed for time, you can maybe at least get a start on some questions. It’s not perfect, and some of the questions are relatively easy to answer. Overall, AI is really a good place to start, perhaps, but it’s always helpful to be able to layer some expertise on top of that.

It is sometimes not great at figuring out or differentiating little details, but when you’re starting with a blank sheet of paper and need a good place to start, AI is a great way to get going. It can help with things like developing images to illustrate a point you’re trying to make. Maybe it can develop images to depict a patient interaction for a typical patient you might see. We even got into developing AI videos that can present something from the researcher’s point of view or maybe even a patient case. I know that a lot of companies sometimes use actors when creating videos to portray a patient, but there may be a way to use AI instead to portray a patient as they’re coming into a practice and have people respond and react to their case.
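
As a rough illustration of the question-drafting use described above, here is a minimal Python sketch that asks a chat model to draft multiple-choice items from slide text. It assumes the openai Python package (v1.0 or later) and an API key in the environment; the model name, prompt wording, and slide text are placeholders rather than anything shown at the workshop, and any output would still need expert review.

```python
# Minimal sketch: drafting CME assessment questions from slide text with an LLM.
# Assumes the openai package (>=1.0) and OPENAI_API_KEY set in the environment.
# Model name, slide text, and prompt wording are illustrative placeholders.
from openai import OpenAI

client = OpenAI()

slide_text = """
Slide 12: First-line treatment options for condition X ...
Slide 13: Monitoring and follow-up recommendations ...
"""  # in practice, text extracted from the activity's slide deck

prompt = (
    "You are helping write assessment items for a continuing medical education activity.\n"
    "From the slide text below, draft 3 multiple-choice questions, each with 4 answer "
    "options, the correct answer marked, and a one-sentence rationale.\n"
    "Target audience: practicing clinicians. Questions must require the slide content, "
    "not general knowledge.\n\n"
    f"Slide text:\n{slide_text}"
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[{"role": "user", "content": prompt}],
)

# A starting point only; a subject-matter expert still reviews every item.
print(response.choices[0].message.content)
```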

i3 Health: That’s really interesting. That’s a really creative idea to use it for videos in that way.

Dr. Salinas: And sometimes when you’re presenting outcomes back to supporters, it is great to have that conversation with someone describing it, but you can’t always be on that call with every supporter, or they can’t necessarily take that home with them. Maybe having a little box at the top that walks through the outcomes data and its impact on patients could be useful. As I was talking about in the workshop, I’m not great at getting through a whole paragraph without making mistakes. I like to have my script ready, and the ability to upload that into AI is pretty useful. There are some websites out there that we discussed that even allow you to create your own avatar, so it could even be your face doing that. You don’t even have to talk; you just upload your script and go.

i3 Health: Oh wow, that’s very cool.

Dr. Salinas: It’s very cool, and it’s a little scary. We got into that too because it seems pretty close. You can see a couple little things here and there, but it’s pretty close, and it can even use your voice, so it can get interesting. Again, I think that we’re just now getting to this. We’re maybe a year and a few months out from the launch of ChatGPT, and AI has been a big thing for maybe the last six months. If we think about a year from now, or two years from now, it’s going to be a completely different landscape from where we are now. We’re just now really starting to understand it.

i3 Health: Definitely. As all these new platforms are emerging, are there any that you think are currently, or will be, the most useful or applicable for CME needs?

Dr. Salinas: I think what we’ve seen is that any time something is freely available, it’s probably not that useful, unfortunately. When you compare the basic ChatGPT to the paid version, the free one is so much less useful. The paid version will actually let you upload data. You can actually have it make a graph of the data, and then it can even give you some basic statistics on that data if you set it up correctly.

There are others in the works. You’ve got Google Gemini now; I haven’t really had a lot of experience with that one. Microsoft Copilot, I think, is a great idea: have it built within the software. It’s not there yet; it hardly does anything that you would want it to do, but I think that just building it in and then working with that is a great idea. We’re not there yet as far as what it was conceived to do. You can ask it to do things that you think it can do, or that GPT-4 can possibly do, but it says it can’t. We’re still kind of working out the details there.
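
To make the “upload data, get a graph and some basic statistics” idea above concrete, here is a minimal sketch of the same workflow done locally with pandas and matplotlib rather than inside a chat tool. The file name and column names are hypothetical, not from any real outcomes dataset.

```python
# Minimal sketch: basic statistics and a simple chart from an outcomes dataset,
# the kind of output the paid chat tools can generate from uploaded data.
# File name and column names are hypothetical.
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("outcomes_survey.csv")  # e.g., pre/post assessment responses

# Basic descriptive statistics on pre/post knowledge scores
print(df[["pre_score", "post_score"]].describe())
print("Mean change:", (df["post_score"] - df["pre_score"]).mean())

# A simple bar chart of mean pre vs. post scores
df[["pre_score", "post_score"]].mean().plot(kind="bar", ylabel="Mean score")
plt.title("Knowledge scores before and after the activity")
plt.tight_layout()
plt.savefig("pre_post_scores.png")
```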

i3 Health: You talked a little bit about this, but what are some of the limitations for using AI for CME/NCPD activity creation, and are there any steps that can be taken to help maximize its potential in light of these limitations?

Dr. Salinas: Starting with limitations, I think it’s only as good as what’s put into it. You have to ask it very specific questions. Sometimes, you think you’re asking it questions that are understandable, and it gives you responses that aren’t quite accurate or not quite there yet. I think the idea with this is that you have to think about it almost like giving a task to an intern. They will do exactly what you tell them to, but nothing more. You have to be very careful and look over some things, just like you would look over anything that’s done by an intern and make sure that everything is accurate.

To that point, if you’re having to check everything to make sure it’s accurate, is it really that helpful? Couldn’t you just do it yourself in the first place and be done with it? Probably, but there are some skills we can use to optimize this. Prompt engineering is a big one right now: how do you write the prompt so the AI gives you exactly what you want? We went through a lot of this in the workshop, trying to get it to do what you want through a series of questions. It can take a while, especially if you have data that you’ve uploaded into the AI software platform; it can take a minute per question sometimes. Is it easier for me to just make a graph in Excel versus asking ChatGPT to do it? I could probably do it in 30 seconds, whereas it could sometimes take 15 minutes or so for the AI to get exactly what I want, because it has to review everything and recreate everything. But if you don’t know how to do it and you’ve got a good set of commands, maybe that’s useful for you.

Other things that have to be considered are issues with privacy and copyright. I know that even with Gemini just being released, it says it’s not going to use your inputs and the data that you’re putting in for advertising yet, but the key word is “yet.” You have to think, well, then that’s probably coming. They’re going to be personalizing advertisements based on the types of things that you’re inputting. So, be very careful and read all the user agreements if you’re concerned.

There were people within the session who said their companies wouldn’t even allow them to go to the websites on their work computers. We had to work around that and partner up a bit. There are some companies that are very conservative about this, and probably rightfully so with all the potential privacy issues. Copyright issues are another thing as we’re creating images. Just be aware of where that’s coming from and how it’s being used or how it’s being created. I know that with any publication you do, if you use AI in the development of that publication, you have to disclose that. I assume you would have to disclose any individual’s input into that paper or manuscript.

Then we got to talking about, well, if you have to disclose that, what do you have to say if you’re putting together a grant request or putting together some content or slides using AI? Should you have to disclose that too? There’s no guidance right now. I expect there will be some soon from the Accreditation Council for Continuing Medical Education (ACCME) and other large accreditation or regulatory bodies, but right now there’s not. There are a lot of internal decisions that have to be made on how to use some of these tools.
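
As a sketch of the prompt-engineering point raised earlier in this answer, the snippet below contrasts a vague request with a more specific one that spells out role, audience, task, and format. Both prompts are illustrative only and are not taken from the workshop.

```python
# Prompt engineering sketch: the same request, vague vs. specific.
# Both strings are illustrative; {slide_text} would be filled in with str.format().

vague_prompt = "Write some questions about this slide deck."

specific_prompt = """
Role: medical education writer drafting post-activity assessment items.
Audience: community-based clinicians.
Task: write 3 case-based multiple-choice questions from the slide text below.
Constraints:
- 4 answer options each, exactly one correct, with a brief rationale.
- Each question must require information from the slides, not general knowledge.
- Return the result as numbered plain text.
Slide text:
{slide_text}
"""

# The second prompt spells out exactly what is wanted, which is the
# "treat it like an intern" idea: be explicit, then still check the output.
```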

i3 Health: It’s really interesting. It definitely sounds like it can be a very useful tool, but it has its caveats and its limitations. It’ll be interesting to see the different types of regulation that come into play as this all plays out.

Dr. Salinas: I worry a little bit that the people who are talking about this are very optimistic. You should be optimistic, but I think you just have to remember at the end of the day how this is being used and what the algorithms really do. It’s almost an autofill, right? It’s kind of a smart autofill. What would you end up with if you just hit that middle autofill button over and over and created something? It sounds good, but is it real? Is it true? I think it always has to be thought about in that respect: there’s not somebody on the other end who’s actually putting a thoughtful response together.

i3 Health: That’s a good analogy and definitely important to keep in mind. My last question is, for continuing education companies that are interested in starting to use AI for their activity creation and outcomes, how do you recommend they can get started?

Dr. Salinas: Most of the people in that audience had started with some basic things like creating titles for their programs, maybe creating outlines, something to start with. Again, I wouldn’t rely completely on AI. Some of the supporters in the audience did note that it was pretty obvious when people were using AI a bit too much in their grant proposals. I think that using it as a way to start from a blank page and get some ideas going is a great place to start.

One of the big things that we did at that session was just letting people experiment and try some things: putting in some ideas for images, creating some icons and pictures using AI, and struggling a little bit with it to see what the capabilities are. You only get better once you really practice with it, like any other skill. So, just sit down and practice, try out some things, and use tools like ChatGPT and whatever’s available for your team.

i3 Health: As we wrap up, were there any other points that were made in the workshop or any take-home messages that you’d like to share for the audience?

Dr. Salinas: I think that’s basically it. I’ll say that the big thing, if you’re curious about experimenting with AI in your education, is to just try it and see how it works for you. Be very careful as you’re progressing with that. Go slowly. Don’t rely on it completely for 100% accuracy. If you’re doing references, for example, and you use AI specifically for references, remember that you could get back things that are completely made up, so if you don’t check it, it might be wrong. Pick a tool, try it out a little bit, and use it. Don’t be scared, but you also don’t have to go all in. You don’t have to go all the way. Just try some things out and see what you can do with them, and report back, because I think we’re all still learning.

i3 Health: Exactly. This has been really fascinating to hear about all the different uses of AI and the important things to keep in mind while using it. Thank you so much for coming on today to talk about this. It was really great to hear about.

Dr. Salinas: It was my pleasure. Thank you.

About Dr. Salinas 

Gregory Salinas, PhD, FACEHP, is the President of CE Outcomes, a strategic partner for educational health care organizations. He is also Past Chair of the Alliance Research Committee and founding Steering Committee Member of the Outcomes Standardization Project. Dr. Salinas leads and engages in research to identify gaps in health care provider education and design innovative methods to fill these gaps.

Transcript edited for clarity. Any views expressed above are the speaker’s own and do not necessarily reflect those of i3 Health.

