Strengthening Listening Skills and Learning Artificial Intelligence Vocabulary

In this instalment of English learning at Zaban Elm, alongside strengthening your listening skills and learning artificial intelligence vocabulary, we look at one of science fiction's visions of progress in the field of artificial intelligence.

Although the creation of robots has transformed life in the modern age, and many routine everyday tasks can now be handed over to them, it has also posed new challenges. One question occupying scientists is how robots make decisions, especially ethical decisions. Rules have been drawn up for programming robots to this end, yet concerns about how they will evolve remain. The audio file below, which comes with a transcript, explores this topic.

 

To strengthen your listening skills and learn the artificial intelligence vocabulary, listen to the audio file below and try to grasp its main points and message.

Then listen to the audio again while following the transcript, paying attention to how the words are pronounced and trying to understand more of the text.

Living in the year 2020 is great. Every morning, my robot butler brings me my replicated coffee and at work, my friendly android colleague and I make the weekly podcast. Well, okay, the year 2020 isn’t exactly how science-fiction writers of the past may have imagined. Artificial intelligence is everywhere. It’s more likely to be found in an algorithm sending me targeted advertising than in a helpful robot. But one thing that’s still important is the question of how machines make decisions, in particular, ethical decisions. One science-fiction writer who’s had a big impact on these kinds of ideas is Isaac Asimov. Now, Asimov’s personal ethics were dubious. He was well known during his lifetime for his unapologetic harassment of women. Despite this, his writing, in particular, stories imagining how robots might be designed to follow simple, ethical rules, continues to inspire debate. This month marks 100 years since Asimov was born, and Nature has published an essay on Asimov’s work by David Leslie, Ethics Fellow at the Alan Turing Institute. Reporter Shamini Bundell set out to talk to David about whether Asimov’s ideas about robotic ethics still apply, as artificial intelligence becomes more prevalent. She found him at the Institute, which is based in London, nestled in the centre of the British Library.

Here we are in the British Library. I can see just huge stacks of antique books here already, and how many Asimov books do you reckon they’ve got in the library here?

Hundreds. I hope that it would be nearly 500 that exist, or around 500.

As well as being an Asimov fan, David has a particular interest in Asimov’s vision of a future where humans live and work alongside robots.

What Asimov did was he took a world in which robots were portrayed as kind of alien monsters, right, and he tried to make the stories more realistic, where they’re exploring the possibilities that are opened up by robotics and what we now call artificial intelligence.

And you’ve got some books on the table here, including I, Robot, which is a number of short stories. Could you give us some idea about the kind of things that came up in these stories that he discusses in these books that were quite new at the time and have had quite a lot of influence since then?

Well, I mean, I think we would have to first talk about the famous three laws of robotics, which actually arise in the I, Robot series, and the three laws are, basically: a robot must not injure a human being or allow, through inaction, a human being to come to harm. That’s the first law. The second law is: a robot must obey orders given by a human being unless that contradicts the first law. And the third law is: a robot must protect its own existence, unless that protection would come into conflict with the first two laws. So, the stories that he wrote in I, Robot and the other robot stories beyond that had to do with the ways in which these three laws play out in real-world circumstances. For instance, I’m thinking of one in particular where he has a robot, I think it was Herbie, who is able to read minds, and Herbie started to lie. So here it is, a very interesting passage:

“She [Calvin] faced them and spoke sarcastically. “Surely you know the fundamental First Law of Robotics.”

The other two nodded together.

“Certainly,” said Bogert, irritably. “A robot may not injure a human being or, through inaction, allow him to come to harm.”

“Nicely put,” sneered Calvin. “But what kind of harm?”

“Why – any kind.”

“Exactly! Any kind! But what about hurt feelings? What about deflation of one’s ego? What about the blasting of one’s hope? Is that injury?”

Lanning frowned, “What would a robot know about that –” and then he caught himself with a gasp.

“You’ve caught on, have you? This robot reads minds. Do you suppose it doesn’t know everything about mental injury? Do you suppose that if asked a question, it wouldn’t give exactly that answer that one wants to hear?”

Anyway, that’s one of these great passages where the scientists are realising that one can’t programme a notion of injury or harm, simply, formally, into a computer because it requires interpretation.
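
For readers who like to see rules spelled out as code, the priority structure the three laws describe, and the gap this passage exposes, can be sketched in a few lines of Python. Everything below (the Action fields, the causes_harm stub, the permitted check) is invented purely for illustration and is not from the podcast or from Asimov: the branching order of the laws is trivial to write down, while the predicate that decides what counts as harm is exactly the part that resists formalisation.

from dataclasses import dataclass

@dataclass
class Action:
    description: str
    physically_harms_human: bool   # placeholder flag; deciding this is the real problem
    ordered_by_human: bool
    endangers_robot: bool

def causes_harm(action: Action) -> bool:
    # Stub for the hard part: it only recognises physical harm, so the
    # "mental injury" Herbie sees (hurt feelings, deflated egos, blasted
    # hopes) never registers here.
    return action.physically_harms_human

def permitted(action: Action) -> bool:
    # A deliberately crude permissibility check in the laws' priority order.
    # First Law: the robot may not harm a human being.
    if causes_harm(action):
        return False
    # Second Law: obey a human's order, provided the First Law is not violated.
    if action.ordered_by_human:
        return True
    # Third Law: avoid actions that endanger the robot, unless the first two
    # laws have already settled the matter above.
    if action.endangers_robot:
        return False
    return True

if __name__ == "__main__":
    painful_truth = Action(
        description="give an answer that deflates the questioner's ego",
        physically_harms_human=False,   # no physical harm, so the stub is blind to it
        ordered_by_human=True,
        endangers_robot=False,
    )
    print(permitted(painful_truth))  # True: the harm Herbie would recognise is invisible to these rules

Running the sketch waves the action through, which is precisely the kind of injury the scientists in the passage realise they cannot capture with simple, formal rules.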

The three laws seem to be a way of trying to program in ethics – let’s solve all sorts of moral conundrums about people tied to tram tracks with some simple rules. How useful are those rules in your discussions of AI ethics?

We have to remember, the three laws of robotics were a literary device for him, so I think that a lot of times in our kind of contemporary world, we’re seeking out a moral panacea for the problems that are raised by artificial intelligence, but for Asimov, he really intended the laws to be an occasion for reflection on the human impacts of technology. For me, it sort of casts a floodlight on the need to actually think about automated systems as automated, as following prescribed rules and the limitations of that. In other words, the system might have prescribed rules, but the system won’t be moral in the same way that humans are moral because humans have to interpret what things like ‘harm’ or ‘human’ or ‘humanity’ mean.

And specifically, if you’re trying to program a machine to make a decision which may include ethical components, what are the kind of challenges that people are facing?

Well, I think first off, there’s the challenge of thinking about where the values are going to come from that are going to inform the programming or the behaviour of the instrument. The way that an automated system is designed derives from all of the values, all of the human choices of those who are involved in its design, production and implementation, and so we have a big set of dilemmas here about who’s making the technology. Are the makers of the technology representative of the world that the technology will impact?

With the kind of machines and robots that we have today, we don’t have humanoid robots wandering around helping us with everyday tasks, but we do have things that we’re trying to think about – self-driving cars is the one that always comes up. What kind of ethical problems are we faced with right now?

I think we live in a world where we are increasingly subjected to the decisions of automated systems. We live in an increasingly prediction-oriented society where you’ve got a lot of large-scale algorithms that are anticipating or pre-empting bits of our behaviour. Just think about the various social media outlets that use curatorial algorithms and that world is not necessarily a world where the automated systems are our companions, and I think that would have horrified Asimov. I think that for him, when we live in a world that is algorithmically steered, we’ve lost that component of human agency and human freedom that he saw at the very core of what it is to be human and what it is to actually have and use technology.

And what kind of a future do you see? Where do you see this going, in terms of robots being tools or being used to predict and control and influence?

So, one of the interesting problems that comes up across the stories is this notion of a Frankenstein complex, a kind of irrational gut feel, that in a sense these are just kind of monsters that are going to supplant humans and come to rule the world. The creatures come to take over the creators. And for Asimov, one of his bigger picture thoughts was that we need to overcome this kind of fantasy. For him, robots and robotics was just automation. They were tools. And I think what that means is we need to pay attention to what Asimov says and we need to think of the ways in which machines aren’t necessarily going to be monstrous agents of the future. Rather, think of them as allies, automated allies, that can help us as tools to build a better world together.

After several rounds of repetition and practice, listen to the text once more while checking the meanings of the new words listed below, to further strengthen your listening skills and consolidate the artificial intelligence vocabulary.

 

Word list and meanings

butler – head servant; male head servant
replicated coffee – coffee produced by a machine, such as a sci-fi replicator or robot
artificial intelligence – computer systems that perform tasks normally requiring human intelligence
science-fiction – fiction based on imagined future science and technology
ethics – set of moral principles or values
dubious – questionable, doubtful
unapologetic – not apologetic, expressing no remorse
harassment – bothering, tormenting, pestering
set out to – went to, began to, undertook to
stacks – piles, mounds (here, shelves full of books)
antique – very old, ancient; old-fashioned, not modern
reckon – estimate, guess, suppose
alongside – beside, together with
alien – strange, otherworldly; an outsider
influence – effect, impact; the action of a person or thing that affects another
sarcastically – mockingly, caustically, ironically
deflation – a lowering or puncturing (here, of someone’s ego)
blasting – destroying, ruining
conundrums – puzzles, riddles, difficult problems
literary – relating to literature; a literary device is a storytelling technique
panacea – cure-all, a remedy for every problem
reflection – careful thought, consideration
floodlight – a powerful lamp for lighting large areas; to cast a floodlight on something is to highlight it
prescribed – laid down in advance, dictated
ethical components – moral elements or aspects
dilemmas – difficult situations, difficult decisions
self-driving – able to drive itself, autonomous
automated – operating automatically, without human control
prediction-oriented – organised around prediction
outlet – a channel or platform (here, a social media outlet)
curatorial – relating to selecting and organising content, as a curator does
companion – one who accompanies or assists
steer – to guide or direct; literally, to make a vehicle move in a particular direction
supplant – to replace, take the place of
take over – to gain control, take charge
monstrous – like a monster, frightening; also, extremely large
allies – partners, supporters

 
