NOTE: This content is made available under the Creative Commons CC BY-NC-SA 4.0 License (Attribution-NonCommercial-ShareAlike).
Generative AI is a useful tool that can streamline many processes, including writing. Many sections of this website were developed or edited using generative AI as a tool, mostly OpenAI ChatGPT-4 Plus with Plugins and Anthropic Claude 2. Consequently, I have developed my own very simple system of codes that I apply to any post where generative AI was used in any way, indicating how the text was created; an annotated list is included below.
All entries that use AI are clearly noted as such. Many texts on this blog (starting in early summer 2023) are a complex mesh of several of the processes listed below. I often start with a free-flowing dictation of ideas, transcribed by my phone’s on-board chip (the Tensor chip on a Google Pixel phone is really good at this). That’s the (HA) on the list below. Then, this initial stream of ideas is organized and rewritten in grammatically correct form, using a carefully crafted and tested platform-specific prompt (what works great with ChatGPT-4 Plus will not necessarily work as well with Claude 2, and the other way round). That’s the (AIA) below. Then I read and edit the AI-generated text carefully – sometimes it’s close to what I intended, sometimes it requires more work – and that’s the (HE).
As for (AIG) texts, their use requires a clarification. Sometimes I just need a quick starter text for a specific purpose, and I use it as is. This only works with text where I can easily decide whether the generated output makes sense: for example, asking ChatGPT-4 Plus for a checklist to review when planning a live, in-person workshop is a good case in point, because any “hallucinatory” text elements will be obvious. “If serving food, check everyone’s preferences and dietary restrictions” is good advice; “make sure everyone drove” is not: it is absurd, and immediately obvious as such to any sensible human.
All (AIG) texts are treated as initial starter ideas to be reviewed by a human. It would be ill-advised, if not outright irresponsible, to use (AIG) texts in practice without a thorough practical review or editing session. So, while I think it’s OK to quickly post such a text here as an example, it would not be a good idea to use it for a classroom activity, or to use an (AIG) quiz for real, graded assessment, without taking it through the (HE) phase first. Finally, (AISA) texts are a quick way to process a large number of initial findings – the process assumes that the summaries AI generates are mostly accurate, to a degree that allows a high-level (no fine detail) comparison.
The abbreviations used include:
- (HA) = Human-Authored: the text was written by a human and may have been corrected for spelling and grammar by one of the generative AI systems, with clear instructions that prevent changes beyond those necessary for correcting spelling and grammar. A detailed list of all changes and their justifications is also generated and reviewed.
- (HE) = Human-Edited: after any of the processes listed here (AIG, AIA, and AISA), the text was extensively edited by a human. This often involves substantial rewrites and structural, stylistic, and informational changes.
- (AIG) = AI-Generated: the text was generated from a detailed, carefully crafted and tested prompt. The generated text was reviewed for content and style, and approved by the blog author.
- (AIA) = AI-Assisted: the content started as a human-generated draft and was later edited by AI, usually for spelling, grammatical correctness, and logical flow/coherence, using a prompt carefully crafted and tested by the blog author. The edited text was reviewed for content and style.
- (AISA) = AI Summary Available: this label usually accompanies original content (articles, videos, podcasts) for which an AI-generated summary and analysis has been provided.
Context & Background (AIG-HE):
Generative AI, a technology that uses machine learning to create content or predictions, is poised to become a major disruptive force in our civilization. Its potential impact can be likened to the discovery of fire, the invention of the wheel, or the advent of the printing press – all pivotal moments in human history that fundamentally reshaped society.
Fire, discovered by our early ancestors, was a transformative tool that allowed humans to cook food, ward off predators, and survive in harsh climates. The wheel, invented in ancient Mesopotamia around 3500 B.C., revolutionized transportation and later machinery, enabling the growth of trade and agriculture. The printing press, introduced in the 15th century, made books more affordable, gradually paving the way for the spread of literacy and innovative ideas.
Generative AI holds a similar transformative potential. Trained on appropriate datasets, it can create content, from writing articles to composing music, and analyze trends, such as forecasting the weather or the stock market. This technology will likely disrupt and reshape almost all areas of human activity. However, like any disruptive technology, generative AI also presents challenges. It could lead to job displacement in sectors where AI can perform tasks more efficiently than humans, and ethical concerns arise around the creation of deepfakes and the potential for AI to generate misleading or harmful content.
As we stand on the brink of this new era, it’s crucial to consider how we can harness the benefits of generative AI while mitigating its risks. This includes the development of regulations and ethical guidelines to ensure the responsible use of this technology.
One of the key responsibilities of educational institutions, such as this research university, and especially of teams that focus on the intersection of technology and learning, will be to foster the understanding of AI tools and to educate faculty and students on effective, ethical uses of this novel but disruptive technology, which holds great promise but carries an equally great risk.
