
Semi-automated philosophy

Evgeny Konoplev · 25/05/24 19:54

Introduction

For all workers in creative professions, the emergence of large language models such as ChatGPT and the like presents a great temptation. The advantages and threats of this form of mediation of thought are not new: Plato already recounts the myth of how the god Theuth presented writing, his own invention, to the Egyptian king as a means of improving memory, to which the king replied that writing would sooner become a poison, weakening the attention and memory of those who use it. In the modern world, too, people who have read neither Plato nor Derrida, and have neither reflected on nor discussed their texts, proceed from the intuitive idea that a text produced by a machine is obviously devoid of the originality supposedly inherent in any text written by a human.

Of course, there is a rational grain in this, though a practical rather than a theoretical one: because many topics are very complex and require knowledge of specialized facts, work with references, and tact in approaching the reader, a text written by a person will be 1.5-2 times higher in quality (and sometimes more) than one generated by a neural network at the user's request. However, this is not explained by the presence in our brain, heart or left heel of some invisible, intangible and unknowable entity, registered by no instrument and called the soul, to which metaphysicians, idealists and other superstitious subjects attribute our body's ability to think and feel. It is explained by the fact that the neural network of our brain contains more neurons and is trained on more diverse experience than most existing artificial neural networks. Another significant difference is that the training of our neural network is bound up with the life of our physiological and social body, while artificial neural networks usually reside in the memory of computers and therefore have no life goals of their own. Both are quantitative rather than qualitative differences, which, apparently, will be removed by the further course of scientific and technological progress, unless society first destroys itself in the next world war, the likelihood of which, alas, is all too noticeable.

However, if we ask whether the author of philosophical, scientific or artistic texts may freely use generated text, publishing it under his own name, we involuntarily feel an intuitive rejection of the idea. Such a thought seems dubious, close to plagiarism: as if before us stood a person thoroughly mediocre or already burnt out who, unable to come up with anything himself, found an excuse for his mental impotence in the use of artificial neural networks, intending to regale the public with formulaic generated texts. And we, as readers accustomed to a certain level, naturally do not want to waste time on low-quality texts, which some cunning copywriter together with a stupid neural network can churn out in large quantities.

On the other hand, not all of the public has good taste. And here, as authors, doubt begins to gnaw at us: what if such cunning copywriters, who do not hesitate to abuse modern technologies, replace us in the eyes of the broad public? And then we face a temptation: what if we ourselves start using neural networks to generate, if not all, then at least some trivial parts of the text, which after our editing will, of course, acquire the required quality, while the speed of our work may increase significantly? But where is the line between the moderate and conscientious use of neural networks and their immoderate and formulaic use? Without reflecting on and discussing this problem, we risk either falling behind the times if we neglect neural networks, or incurring public condemnation if we use them to their full potential.

To solve this problem, we need to reflect on three aspects of using neural networks in text generation: public benefit, reader interest, and originality. Let’s look at them in order.

1. Public benefit

The public benefit of using neural networks is obvious: society wants to receive meaningful texts on a variety of topics in sufficient quantity and at sufficient speed. If the author is able to speed up his work without a significant loss of quality, the reader only benefits. At the same time, the thoughtless use of neural networks leads to the production of formulaic texts whose benefit is zero or even negative, since such informational noise clogs the cultural space and prevents one from finding truly worthwhile works.

I myself, as a philosopher, deal with problems of two kinds: philosophy as the study of certain ideological problems that are important and/or interesting regardless of the needs of society, and philosophy as work with public consciousness, that is, educational and pedagogical activity. Accordingly, philosophical texts divide into these two types depending on the goal they pursue: investigating something new, or communicating to the public what has already been achieved. The fact that the results of research may be interesting and useful to other philosophers or to the general public does not change the essence of the matter: I would work on assemblage theory as a universal ontology in any case, since this is a key area that allows us to simplify and concretize our worldview, even if the number of people acquainted with these ideas during my lifetime were zero; but, fortunately, this is not the case!

Judging by my experience of conversing with ChatGPT (see the dialogue "On the meaning of life and death"), I can say that this neural network is capable of giving meaningful comments on proposed theses, although it suffers from a liberal-humanistic bias, is prone to smoothing over sharp edges, and is not by default oriented toward the actual significance of thinkers as reflected in their citation in academic and journalistic literature. Nevertheless, as an interlocutor in the writing of research texts in dialogical form, it is not only in no way inferior to most interlocutors with carbon brains, but even surpasses them: it responds quickly, strives for completeness of sources, gives definitions one can work with further, and does not commit gross logical errors. Some candidates of philosophical and political sciences with whom I have discussed various philosophical and scientific problems were in this respect noticeably not up to ChatGPT's level. This neural network is therefore already capable of replacing some of them today; indeed, it has already replaced them in my philosophical practice! One may object: what if your methods of reasoning also seem unsatisfactory to someone, and he begins to converse with an artificial neural network instead of you? This option suits me quite well: let everyone converse with the interlocutor he likes, and in the way he likes, even if that interlocutor has a silicon brain. There will be fewer scandals and less unproductive criticism in society.

As for educational texts, which are less creative in character, neural networks today, as already mentioned, are capable of producing answers of average social quality at high speed. This greatly facilitates the teaching of philosophy, because for me as a teacher and educator not only the quality but also the quantity of texts matters. Suppose I need to explain to students the history of ancient philosophy. I can refer them to a textbook, but in every textbook the material is presented not quite, or not at all, the way I think it should be. I can give a course of lectures in person, on Zoom or on YouTube, but that is a long, sequential process requiring preliminary preparation, while the student wants a quick answer: the sooner the student begins to master the material, the sooner his thought will come into motion and bear fruit. In addition, any course or seminar involves referring students to texts for independent study. If I give a neural network a series of requests matching the needs of constructing a course, and publish the materials, with appropriate revisions, as a set of articles that would also be useful for students to study independently, do I deviate from the principles of pedagogy, do I do any harm to the educational process? If I wrote texts about Anaxagoras and Anaximenes, Democritus and Heraclitus, and all the other philosophers by hand, I would get roughly the same material that the neural network produces, albeit 1.5-2 times better in quality, but at least 5-6 times more slowly! But will such machine-written texts be as interesting to the reader as those written using only a biological brain?

An analogy with the food industry comes to mind here. Imagine that a farmer living at the beginning of the 20th century learned that a hundred years later most of our food would be grown from species modified first by artificial selection and then by genetic engineering, and also fed with various additives, be it feed supplements for livestock or fertilizers for plants. He might wonder: wouldn't such artificial products be something unnatural and tasteless, inconsistent with our human nature? Yet as a consumer it does not matter to me how fruits, vegetables, milk and meat were produced; the main thing is that they are tasty, healthy and abundant, and modern science can provide us with all of that. Most members of society, I believe, will agree with me on this point.

2. Reader interest

The same holds for maintaining the reader's interest: as a reader, it does not matter to me whether a text was written by one person, a group of authors, a human-machine system, or even a Martian, as long as it is meaningful and interesting. Texts written by people are often boring or entirely meaningless; conversely, a capable author can structure a series of queries and edit the machine's responses so as to interest the public. The point, then, is not whether the producer of a text thinks with silicon or carbon neurons, but what the connections between those neurons are: whether they are well trained, whether they can present the material in a clear and accessible form, emphasize contrasts and sharpen contradictions; in short, whether they have mastery of writing or not.

But the creative, non-trivial nature of working with a neural network shows itself even more in the transition from individual texts to a system of texts, to a hypertext connected by a network of mutual references. Suppose that, using a series of queries, I created in a short time fifty or a hundred articles describing the main authors, schools, books and ideas of ancient philosophy. Each individual text is created by a neural network, but the content of the texts, and in particular the architecture of the hypertext, is designed and controlled by me. And wandering through an extensive network of hyperlinks is one of the most important pleasures of the contemporary reader! Control questions, which are easy to place at the end of each article and within its text alongside the hyperlinks, can and should intensify and direct search activity toward unexpected and therefore interesting parallels, where pieces of information form a complete picture. In general, philosophical and scientific activity has in it something of assembling jigsaw puzzles or solving riddles: finding fragments of semantic matter that fit well together and increase our ability to find the next fragments.
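To make this hypertext architecture concrete, it could be sketched roughly as follows. This is a purely illustrative sketch in Python; the article titles, prompts, and helper names are all invented for the example and are not part of any real system described above:

```python
from dataclasses import dataclass, field

@dataclass
class Article:
    """One node of the planned hypertext (all names are illustrative)."""
    title: str
    prompt: str                                    # the author's query to the model
    links: list = field(default_factory=list)      # titles of related articles
    questions: list = field(default_factory=list)  # control questions for the reader

def dangling_links(articles):
    """Return every hyperlink that points to a non-existent article.
    An empty result means the network of references the author designed
    is closed, i.e. the hypertext is well-formed."""
    titles = {a.title for a in articles}
    return [(a.title, link) for a in articles
            for link in a.links if link not in titles]

# A miniature two-node hypertext on ancient philosophy.
corpus = [
    Article("Heraclitus", "Summarize Heraclitus' doctrine of flux.",
            links=["Democritus"],
            questions=["What does 'panta rhei' mean?"]),
    Article("Democritus", "Summarize Democritus' atomism.",
            links=["Heraclitus"]),
]
```

The machine would fill each article's body from its prompt; the graph of `links` and `questions`, the part the reader actually navigates, remains the author's own design.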

3. Originality

Finally, there is the question of originality. I myself, like any reader, have a reasonable question: dear author, why on earth should I read texts generated by a neural network from your requests, when I, and anyone else, can access that neural network directly? Won't the massive use of machine-written texts produce a mass of similar and therefore meaningless, sterile works repeating one another with only minimal differences? The question is not without merit; but isn't the same thing already happening in popular art and university science? What squalor has modern mass cinema reached, in which the same stock characters live out the same stock plots over and over again? The pulp novel has long been in this state, and still has its readers. As for scientific publications, both world science and Russian science face an acute problem of overproduction of trivial and often pseudoscientific publications. Sometimes things reach the point of absurdity: consider a recent article in a peer-reviewed legal journal in which, over 40 pages, the authors explain the source of all our troubles. It turns out there are two kinds of people on Earth: men and beastmen. The first were created by god and the second by lord-god (for the authors these are two different entities), with the help of "interplanetary genetic technologies". And today it is often difficult to purge such nonsense from journals, and its authors from their positions, since such publications are backed by large capital and long-standing corruption schemes, through which theologians, conspiracy theorists, analytical philosophers of consciousness and other charlatans siphon off educational budgets: that is, our money, collected as taxes, which in theory should be spent on education but in fact goes into the pockets of presumptuous thieves and lying obscurantists.

This means that all the problems we naively project onto machine-written texts have long been inherent in works written by humans, and they are solved in the same ways: conscientiousness in the selection of material, curiosity in developing the topic, and openness to constructive criticism. These are the three minimum conditions for adequate writing, common to all methods of text production.

Indeed: if, besides me, ten more philosophers decide to write their own encyclopedias quickly using neural networks, then owing to differences in problems, interests, life experience and temperament, each will weave his own unique semantic network, different from all the others. If I and a number of other philosophers each build a system of concepts, then, starting from the different problems that occupy us, we will build them in completely different ways. So there is nothing to fear in the idea that the use of neural networks will lead to an abundance of boring, similar texts. On the contrary, only healthy competition in this matter will help determine the features of adequate semi-automated writing.

Conclusions

To summarize: concerns about the use of large language models and other forms of weak artificial intelligence in the creative process are, for the most part, far-fetched. There is a demand from the reading public for good and varied texts, and the task of authors is to supply such texts in sufficient volume and in the shortest possible time. Neural networks are a tool that speeds up the author's work severalfold. The neural network does not replace but complements the writing person's own intelligence, so that the composite human-machine system turns out to be smarter and more productive than either of them separately.

In relation to the neural network, the person acts as an architect of sequences of requests, from which the neural network produces raw text; this text is then edited by the person and, when there is an abundance of material, assembled by him into hypertext. The key value of such a product for the reader lies in becoming acquainted with the original sequence of queries and the architecture of the hypertext, which will be distinct for each author working with the new tools.
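As a purely illustrative sketch (the function names are invented, and the model call is replaced by a stub), the loop just described, in which the author designs the queries, the machine drafts, and the author edits and assembles, might look like this in Python:

```python
def generate(prompt):
    """Stand-in for a language-model call; a real pipeline would query
    an LLM here. This stub just returns a labelled placeholder draft."""
    return f"[draft on {prompt}]"

def edit(draft):
    """The human pass: the author revises the raw machine output.
    Here the revision is symbolized by relabelling the draft."""
    return draft.replace("[draft", "[edited draft")

def assemble(prompts):
    """The full loop: the author designs the sequence of queries,
    the model produces raw text, the author edits each piece,
    and the pieces are joined into one document."""
    return "\n\n".join(edit(generate(p)) for p in prompts)

document = assemble(["Anaxagoras", "Anaximenes"])
```

The design point the sketch makes is simply that the sequence of prompts and the assembly step belong to the human; the stubbed `generate` is the only part the machine supplies.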

ChatGPT itself will tell us much the same if we ask it how the uniqueness of human authorship can be combined with the use of text-generating neural networks:

"Combining the uniqueness of human authorship with the use of neural networks that generate text can be achieved through several approaches:

1. Training a neural network on unique data: in order to create texts that combine the uniqueness of human authorship and the use of neural networks, it is necessary to train the model on unique and diverse data, including various styles, topics and genres. This will allow the neural network to capture the variety and quality of texts.

2. Editing and finalizing the generated texts: after the neural network creates the text, it can be edited and finalized taking into account the individual style and approach of the author. This will preserve the uniqueness of human authorship in the text.

3. Using neural networks as a tool in the creative process: instead of relying entirely on a neural network to create texts, it can be used as a tool for inspiration, generating ideas, or filling in gaps in the text. The human author can then revise and refine the text to make it more unique and original.

4. Combination of different approaches: You can combine the unique ideas and concepts of human authorship with the power of neural networks to create texts that are both original and high quality.

It is important to remember that neural networks can be a powerful tool for creating texts, but they are not a replacement for the creativity and uniqueness of human authorship. Combining these two aspects can lead to the creation of unique and high-quality texts."

In my opinion, a completely sound judgment.


Syg.ma is a community-run multilingual media platform and translocal archive.
Since 2014, researchers, artists, collectives, and cultural institutions have been publishing their work here
