Without a doubt, Artificial Intelligence (AI) now delivers capabilities that can support, compensate for, and in part replace human intelligence. This paper asks where the principal limits of AI lie and whether one can speak of intelligence here in the true sense of the word. If human and artificial intelligence are judged solely by their outward symptoms, the difference increasingly levels out, or the pendulum already swings in the other direction. A reduced concept of intelligence is likely at work here. Drawing on hermeneutics and the Tractatus philosophy of Ludwig Wittgenstein, it is shown that an AI system such as ChatGPT exhibits fundamental deficits: it cannot understand language and cannot learn. While it can draw deductive conclusions like an expert system, it can reason inductively only in a statistical sense, and abductively not at all. Moreover, an AI system such as ChatGPT is a closed, self-referential system that tends towards paradoxes and towards self-replication through autophagy; it is conservative and susceptible to manipulation, and in the long run it reduces the quality and scope of our knowledge of reality. What is required is a technology-critical approach, the overcoming of the behaviorist and naively technical paradigm of AI, and a critical metaphor analysis of AI anthropomorphisms. On this basis, foundations can be laid for meaningful language about AI, and thus also for usable theoretical concepts and models for its utilization and further development.