Teach A Man to Fish
And you are giving him a skill that is probably increasingly obsolete in a world of warming, oxygen-poor oceans. Or maybe you are just making him unfit for more productive activity. In any case, if you then want to teach another guy to fish, you need to go through the whole rigmarole again. Other economically valuable skills also take a lot of time and training.
Making a radiologist, say, takes 12 to 14 years of postsecondary training, after which the radiologist may have a career of perhaps 30, but rarely more than 40, years.
Teach a robot to fish, though, or to read radiological images, and you have (potentially) taught them all. Human-to-human information transfer is slow and very error prone, but computer-to-computer transfer is millions or billions of times faster and, done properly, nearly error free.
I mention this because the AI deniers are at work. The usually acute Sabine Hossenfelder is the latest offender on my must-read list.
Here is the most egregious paragraph:
Artificial Intelligences at first will be few and one-of-a-kind, and that’s how it will remain for a long time. It will take large groups of people and many years to build and train an AI. Copying them will not be any easier than copying a human brain. They’ll be difficult to fix once broken, because, as with the human brain, we won’t be able to separate their hardware from the software. The early ones will die quickly for reasons we will not even comprehend.
Let's take it sentence by sentence. (1) True thirty or forty years ago, but not anymore. (2) Sometimes yes, sometimes no. Many key AIs were created almost entirely by one person. (3) Utter nonsense. Copying them is hardly more complicated than copying a table of network weights. (4) Utter nonsense again. Almost every AI in existence is clearly separable into hardware and software - that's a key difference from human and animal brains. (5) Again, pure BS pulled out of the air. A few early analog AIs may have died of decrepitude, but modern digital AIs are ultimately just lists of computer commands - or tables of weights.
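The third point is easy to check concretely. Here is a minimal sketch, treating a trained model as nothing but a table of network weights; the layer names and numbers are invented for illustration:

```python
# Hypothetical example: an "AI" whose entire learned state is a table of
# network weights. The layer names and values are made up.
import hashlib
import pickle

weights = {
    "layer1": [[0.12, -0.44], [0.98, 0.05]],
    "layer2": [[1.50], [-0.73]],
}

# Copying the model is just serializing the weight table to bytes...
blob = pickle.dumps(weights)

# ...and deserializing it on any other machine. The copy is exact.
copy = pickle.loads(blob)
assert copy == weights

# A checksum over the bytes lets sender and receiver confirm the
# transfer was error free.
assert hashlib.sha256(blob).hexdigest() == hashlib.sha256(pickle.dumps(copy)).hexdigest()
print("copy verified:", copy == weights)
```

Real systems ship gigabytes of weights rather than a toy dictionary, but the principle is the same: the learned state is ordinary data, and copying it is a routine, verifiable operation.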
To be fair, B. comes up with some potential problems of AI. I think she only slightly exaggerates the danger that only the super rich will have access to AI. The problem is not so much the AI per se as the data that provides it with its knowledge fuel. Right now, most of that data is owned by fewer than a dozen super-rich corporations (Google, Tencent, Amazon, Facebook, etc.), with a bit more owned by powerful governments. She also asks: how will you know if you are talking to an AI? Probably only if it chooses to tell you. Should one trust the answers of an AI? That is a very good question indeed.
As Gildor said to Frodo, "advice is a dangerous gift, even from the wise to the wise." Of course it's pretty dangerous if it's from somebody who is trying to pick your pocket, too.
But as for me, I'm going to head for Rivendell. Maybe something will turn up.