
CoAuthor: Stanford experiments with human-AI collaborative writing

Image: robotic hands on a keyboard, co-author concept (Credit: Mopic/Shutterstock)


This article is an existential crisis. It is written by a professional writer writing about artificial intelligence that helps writers write. There's a lot of nagging doubt in my mind about this. Is that okay? After all, shouldn't humans write their own content? And does this mean the writing is on the wall for an entire profession? Will there be no more writers? We all have to ask ourselves what our roles in this brave new world will be.

The italicized text above and below was written by a large language model. While professional writers may not need to fear for their careers just yet, at least by the example above, the model seems to do a good job of grasping the topic at hand and sensing its co-writer's (my) existential dread.

Meet CoAuthor. It's an interface, a dataset, and an experiment all in one. CoAuthor comes from Mina Lee, a doctoral student in computer science at Stanford University; her advisor Percy Liang, a Stanford associate professor of computer science and director of the Center for Research on Foundation Models, born from the Stanford Institute for Human-Centered Artificial Intelligence; and her collaborator Qian Yang, an assistant professor at Cornell University.

"We believe language models have an enormous potential to help our writing process. People are already finding these models to be useful and incorporating them into their workflows. For example, there are many books and award-winning essays co-authored with such models," Lee says.



Through her experiments, Lee has come to believe that language models are most useful and powerful when augmenting human writing skills, rather than replacing them.

"We think of a language model as a collaborator in the writing process that can enhance human productivity and creativity, helping people write more expressively and faster," she says.


AI that helps people write isn't new. Google's predictive search is an easy example, as are the next-word text suggestion algorithms on a smartphone. Other apps help you compose an email or even write code. So, why not create AI that helps humans write well?

Writing computer code or a text to your friend is a far cry from writing an arresting poem or a deft essay. Those pieces require creative writers who invent combinations of words that are original, interesting, and thought-provoking. It's hard to imagine a machine writing, say, Cormac McCarthy. But perhaps all that's missing is the right artificial intelligence tool.

CoAuthor is based on GPT-3, one of the recent large language models from OpenAI, trained on an enormous collection of already-written text from the internet. It may seem a tall order to think that a model based on existing text could be capable of creating something original, but Lee and her collaborators wanted to see how it could nudge writers to deviate from their routines, to go beyond their comfort zone (e.g., the vocabularies they use daily), and to create something they might not have written otherwise. They also wanted to understand the impact such collaborations have on a writer's personal sense of accomplishment and ownership.

"We want to see if AI can help humans achieve the intangible qualities of great writing," Lee says.

Machines are good at doing search and retrieval and spotting connections. Humans are good at spotting creativity. If you think this article is written well, it is because of the human author, not in spite of it.

AI/human collaboration

The goal, Lee says, was not to build something that would make humans write better and faster. Instead, it was to investigate the potential of recent large language models to assist in the writing process and to see where they succeed and fail. The team built CoAuthor as an interface that records writing sessions at the keystroke level, curating a large interaction dataset as writers worked with GPT-3 and analyzing how human writers and AI collaborate.

Illustration: flowchart of how a writer works with CoAuthor (image via Stanford)

The researchers engaged more than 60 people to write more than 1,440 stories and essays, each one assisted by CoAuthor. As the writer begins to type, they can press the tab key and the system presents five suggestions generated by GPT-3. The writer can then accept the suggestions based on their own sensibilities, modify them, or disregard them altogether.
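The tab-to-suggest loop described above can be sketched in a few lines of Python. This is a hypothetical mock, not CoAuthor's actual code: `get_suggestions` stands in for a call to a language model API, and the function names and canned outputs are illustrative assumptions.

```python
# Hypothetical sketch of a tab-to-suggest writing loop.
# `get_suggestions` is a stub standing in for a GPT-3 API call.

def get_suggestions(prompt: str, n: int = 5) -> list[str]:
    # A real system would query a language model here and return
    # n candidate continuations of the prompt.
    return [f" ...continuation {i}" for i in range(n)]

def on_tab(document: str) -> list[str]:
    """Writer pressed the tab key: fetch five suggestions for the current text."""
    return get_suggestions(document, n=5)

def apply_choice(document: str, suggestion: str, action: str, edited: str = "") -> str:
    """Writer accepts, modifies, or ignores a suggestion."""
    if action == "accept":
        return document + suggestion
    if action == "modify":
        return document + edited  # writer rewrites the suggestion first
    return document               # "ignore": text is unchanged

doc = "The detective opened the door and"
options = on_tab(doc)                      # five candidate continuations
doc = apply_choice(doc, options[0], "accept")
```

The three branches of `apply_choice` mirror the three outcomes the study records: a suggestion can be inserted verbatim, inserted after editing, or dismissed with the document left untouched.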

As a dataset, CoAuthor records all interactions between writers and the model, including text insertion and deletion as well as cursor movement and suggestion selection. With this rich interaction data, researchers can analyze when a writer requests suggestions, how often the writer accepts them, which suggestions get accepted, how they were edited, and how they influenced the subsequent writing.
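Keystroke-level logging of this kind reduces to an append-only event stream, and questions like "how often does the writer accept suggestions?" become simple aggregations over it. A minimal sketch follows; the event kinds and field names are illustrative assumptions, not the published CoAuthor dataset schema.

```python
# Minimal sketch of a keystroke-level interaction log and one analysis
# it supports: the writer's suggestion-acceptance rate.
from dataclasses import dataclass, field

@dataclass
class Event:
    kind: str          # "insert", "delete", "cursor", "suggest-open",
                       # "suggest-accept", or "suggest-reject"
    payload: str = ""  # inserted/deleted text, or the suggestion involved

@dataclass
class Session:
    events: list = field(default_factory=list)

    def log(self, kind: str, payload: str = "") -> None:
        self.events.append(Event(kind, payload))

    def acceptance_rate(self) -> float:
        """Fraction of suggestion requests that ended in an acceptance."""
        opened = sum(e.kind == "suggest-open" for e in self.events)
        accepted = sum(e.kind == "suggest-accept" for e in self.events)
        return accepted / opened if opened else 0.0

s = Session()
s.log("insert", "Once upon a time")
s.log("suggest-open")
s.log("suggest-accept", " there was a dragon")
s.log("suggest-open")
s.log("suggest-reject")
print(s.acceptance_rate())  # 0.5: one acceptance out of two requests
```

Because every edit and cursor move is an event, the same stream also supports replaying a session or measuring how much accepted text survives later revisions.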

As an analytical tool, CoAuthor can determine how helpful the accepted suggestions are to the human writer or, conversely, interpret rejected suggestions as a proxy for the writer's taste, which could improve the suggestions made by future language models.

After each writing session, the writers took a survey about their relative satisfaction with the collaboration and their own sense of productivity and ownership in the resulting work. Often, the writers said, the words and ideas proposed by CoAuthor were welcomed as both new and useful. At other times, the suggestions were disregarded because they took the writer in a different direction than intended. And sometimes the writers felt the suggestions were too repetitive or vague and, as a result, didn't add much value to their stories and essays.

Lee found that the degree of collaboration between GPT-3 and the writers seems to have little influence on their satisfaction with the writing process, but it may have a negative influence on their sense of ownership of the resulting text. Even so, many participants enjoyed taking new ideas from the model's suggestions and using them in subsequent writing.

"I especially found the names helpful," wrote one of CoAuthor's participants in a post-survey. "I was actually trying to think of a stereotypical rich jock name and the AI provided me with [one]. Perfect!"

CoAuthor's creators also found that the use of large language models increased writer productivity, as measured in the number of words produced and the amount of time spent writing. On a purely practical but intriguing level, the sentences written jointly by a human writer and a model also seem to have fewer spelling and grammatical errors, and higher vocabulary diversity, than purely human-produced writing.

"The best collaborations between a human and a model seem to be when the writer uses their own creative sensibilities to judge the suggestions and decides what to keep and what to omit," Lee explains. Overall, participants felt CoAuthor brought new suggestions to the table and improved both their productivity and their artistry.

Cause for concern?

In the near term, there are several technical hurdles that will need to be surmounted. It is well documented that large language models are prone to generating biased and toxic language. Currently, CoAuthor filters out potentially problematic suggestions based on a list of banned words. However, there is a necessary tension between employing more extensive filtering and properly evaluating a language model's capabilities.

Ultimately, maybe the AI capable of producing masterpieces is not one that doles out polished prose or provocative poetry, but rather the kind that offers suggestions to complement a human's writing. That is already beginning to happen, as CoAuthor ably proves. But however much the wordsmith turns to technology for aid, artificial intelligence that writes well is still a long way off.

Andrew Myers is a contributing writer for the Stanford Institute for Human-Centered AI.

This story originally appeared on Copyright 2022

