Is there a theoretical possibility of having a full computer on a silicon wafer instead of a motherboard?

It is theoretically possible. However, I would describe it as practically impossible.

When manufacturing silicon chips, you have a certain defect density across the wafer. Chips in the center are usually fine, while more defects show up towards the edges. This is normally not a big problem: let's say you have 1000 individual chips on one wafer and 20 defects, caused by process variations, particles on the wafer, and so on. You will lose at most 20 chips, which is 2%.

If you manufactured a single chip spanning this wafer, you would lose 100%. The bigger the chip gets, the lower your yield gets.
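
To attach numbers to this, here is a minimal Python sketch of the classic Poisson yield model, Y = exp(-D * A); the defect density and areas below are assumed illustration values, not data from a real process:

    # Sketch: Poisson yield model Y = exp(-D * A), where D is the defect
    # density (defects/cm^2) and A is the die area (cm^2).
    # All numbers are illustrative assumptions, not real process data.
    import math

    D = 0.1                # assumed defect density, defects per cm^2
    die_area = 1.0         # cm^2: one of ~1000 small dies on the wafer
    wafer_area = 700.0     # cm^2: roughly a full 300 mm wafer

    yield_small = math.exp(-D * die_area)    # ~90% of small dies work
    yield_wafer = math.exp(-D * wafer_area)  # ~4e-31: effectively zero
    print(f"Small-die yield:   {yield_small:.2%}")
    print(f"Wafer-scale yield: {yield_wafer:.2e}")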

I work in the semiconductor industry and have yet to see a wafer where all chips are fully functional. Nowadays we achieve very high yields, but there are still defects on every wafer.

Another thing: not all components of a computer can be manufactured on a silicon chip. For example, the coils used for the DC/DC regulators cannot be implemented on-chip. Inductors on chips are quite a pain; they are usually only built for >1 GHz applications, such as transformers for signal coupling. A power inductor of several hundred nH or even µH is not possible (tm).
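
For a feel of the numbers, here is a small Python sketch of the textbook buck-converter inductor formula, L = Vout * (1 - Vout/Vin) / (f_sw * ΔI); the operating point below (12 V in, 1 V out, 1 MHz, 1 A ripple) is an assumption chosen for illustration:

    # Sketch: minimum inductance for a buck converter, using the textbook
    # formula L = Vout * (1 - Vout/Vin) / (f_sw * delta_I).
    # All operating-point values are illustrative assumptions.
    v_in = 12.0      # V, input rail
    v_out = 1.0      # V, CPU core rail
    f_sw = 1e6       # Hz, switching frequency
    delta_i = 1.0    # A, allowed inductor ripple current

    L = v_out * (1 - v_out / v_in) / (f_sw * delta_i)
    print(f"Required inductance: {L * 1e6:.2f} uH")   # ~0.92 uH
    # On-chip spiral inductors top out around a few nH, orders of
    # magnitude too small, which is why these coils stay off-chip.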

Another downside is the need for multiple technologies. CPUs are usually made in a very small CMOS technology for high transistor density. However, let's say a headphone output has to drive 32 Ω headphones: manufacturing a headphone amplifier in a 7 nm FinFET technology is not ideal. Instead you use different semiconductor technologies with lower speed but higher current and voltage capability. A lot of different semiconductor technologies are used to manufacture all the chips for a single computer.
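
A quick back-of-the-envelope sketch in Python shows the problem; the 1 V rms listening level is an assumed figure:

    # Sketch: what driving 32 ohm headphones actually demands.
    # The listening level is an illustrative assumption.
    import math

    r_load = 32.0    # ohms, headphone impedance
    v_rms = 1.0      # V rms, a fairly loud listening level (assumed)

    v_peak = v_rms * math.sqrt(2)   # ~1.41 V peak swing
    i_peak = v_peak / r_load        # ~44 mA peak output current
    print(f"Peak swing: {v_peak:.2f} V, peak current: {i_peak*1e3:.0f} mA")
    # A 7 nm FinFET core runs from roughly a 0.75 V supply, so it cannot
    # even produce this voltage swing, let alone source tens of mA.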

Regarding memories like DRAM and nonvolatile memories like flash: these also need specific technologies. One downside of manufacturing modern microcontrollers (processors with RAM and ROM on board) is that the semiconductor process is somewhat bottlenecked by the embedded flash these controllers need. More powerful processors usually don't have on-board program memory (except for a very small mask ROM which holds the bootloader).

It is still better to combine multiple dedicated chips than to try to put everything on one die. As you've already stated, modern SoCs do put a lot of formerly separate components onto a single IC.

However, putting everything on one chip is

  1. not very flexible
  2. not very cost efficient due to the higher yield losses
  3. not ideal from a technical perspective.

Can big silicon companies such as Intel and AMD produce a whole computer with CPU, chipset, RAM, and memory controllers, all in one microchip?

Keep in mind that what you might think of as one silicon chip may actually be multiple dies in one package. A good example is the latest generation of AMD Ryzen 9 CPUs, which are made of multiple "chiplets" bonded together in one package. AMD does this to improve yield and reduce cost, but the same method could be used to provide flash memory, CPU, and DRAM in the same package.
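
To put rough numbers on the yield argument, here is a minimal Python sketch comparing the silicon consumed per good product, reusing the textbook Poisson yield model Y = exp(-D * A); the defect density and die areas are made-up illustration values, not AMD's actual figures:

    # Sketch: silicon area consumed per good product, monolithic die vs.
    # chiplets, under the Poisson yield model Y = exp(-D * A).
    # Defect density and areas are illustrative assumptions.
    import math

    D = 0.1          # assumed defects per cm^2
    big_die = 4.0    # cm^2: one monolithic die
    chiplet = 1.0    # cm^2: four known-good chiplets per product

    area_mono = big_die / math.exp(-D * big_die)        # ~5.97 cm^2
    area_chip = 4 * chiplet / math.exp(-D * chiplet)    # ~4.42 cm^2
    print(f"Silicon per good product, monolithic: {area_mono:.2f} cm^2")
    print(f"Silicon per good product, chiplets:   {area_chip:.2f} cm^2")
    # Defective chiplets are discarded before packaging, so only
    # known-good dies get combined -- that is where the savings come from.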

I have not seen a single reference where a whole computer is built inside a single chip instead of being modularized and spread across a board.

What you are describing is a microcontroller or system-on-a-chip.

Many of the microcontrollers and system-on-a-chip devices out there have non-volatile storage, RAM, CPU, and peripherals on one die. In terms of capability they are comparable to a 1980s or early 1990s era PC.

  • CPU clock rates for those devices are in the tens to hundreds of MHz range (comparable to a CPU in an '80s or '90s era PC).
  • Flash memory may be up to a few MB (comparable to a 1980s era hard drive).
  • RAM may be up to around 1 MB (comparable to an early 1980s PC).
  • ARM or PowerPC based chips may feature an MMU, and Linux can run on some of them.

While not technically one chip, Texas Instruments offers a technology called "Package on Package" (PoP) for their OMAP mobile phone processors. The PoP chips are BGAs that feature solder pads on their top side and balls on the bottom side. Instead of placing several chips next to each other on a PWB, you stack a CPU, flash memory, and DRAM vertically right on top of each other.

Another technology that comes close is the Xilinx Zynq FPGA. You can get a system running PetaLinux with as few as 3 chips plus power supplies. Some of the peripherals also require a physical-layer transceiver if they do not fit one of the IO standards supported by the chip.

  • At a minimum you need to add a flash memory so you can boot the chip. You can then pull the OS from the flash memory or load it over a network.
  • There are several MB of on-chip memory available, but if you want more than that you can add a DRAM chip of up to 2 GB.
  • It features either one or two ARM CPU cores running in the 600-700 MHz range that can run PetaLinux.
  • The Zynq chip features lots of built-in peripherals such as Ethernet, USB, serial, etc. The only things you need to add are physical-layer transceivers.
  • The chips feature a large block of FPGA fabric that can be used to create any additional logic you need.
  • Some interfaces, like LVDS video, can be driven directly from the chip using FPGA resources.

Theoretically, yes. Wafer-scale integration has been discussed in the past.

Practically, no. Manufacturing processes for DRAM and flash are customized and tweaked for those products, so they involve extra process steps that are not needed for ordinary logic. Those extra steps drive up the cost of everything on the wafer. Integrating more and more logic onto a larger and larger silicon device also leads to a higher number of defects per device, which increases the need for redundancy and self-repair.
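
As a rough illustration of the redundancy math, here is a Python sketch that models a wafer-scale device as N blocks with S spares, where the device works if at most S blocks are defective; all parameters are assumed values, not data from any real product:

    # Sketch: yield of a wafer-scale device with block-level redundancy.
    # The device works if at most `spares` of its blocks are defective.
    # All numbers are illustrative assumptions.
    import math

    D = 0.1            # assumed defects per cm^2
    block_area = 1.0   # cm^2 per block
    n_blocks = 700     # blocks covering roughly a full 300 mm wafer
    spares = 100       # spare blocks available for self-repair

    p_bad = 1 - math.exp(-D * block_area)   # ~9.5% chance a block fails
    y = sum(math.comb(n_blocks, k) * p_bad**k * (1 - p_bad)**(n_blocks - k)
            for k in range(spares + 1))
    print(f"Yield with {spares} spares: {y:.2%}")   # essentially 100%
    # Without redundancy (spares = 0) the same device would yield
    # (1 - p_bad)**700, i.e. about 4e-31 -- effectively zero.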

Finding a package to reliably support, connect, and dissipate heat from a very large piece of thin, brittle silicon is another problem.

It just doesn't make sense. If it did, Intel, ARM, and AMD would be doing it.