CS4Some

Incredibly relevant to this conversation is the previous discussion we had about what field computer science should fall into. Is it engineering, science, or art? Or something else? In my post on that topic, I describe how what most people think of as computer science can be categorized as engineering, science, or art, depending on what exactly you're doing. The relevance to this topic is that each classification should be treated differently when it comes to primary/secondary education.

Software development has many similarities to art. Much like artists, developers work to hone a set of skills applicable to the type of work they want to produce. JavaScript or oil painting, Objective-C or pastels, C++ or sculpture: each requires some practice and knowledge that can largely be developed by doing. As the developer creates a product, he iterates many times through different designs and tactics, testing on an audience along the way. Painters begin with a sketch (like a UML diagram?), then apply layers and layers of paint to the canvas, sometimes painting over old features to make way for new ones. Based on this comparison, I don't believe software development should be required of all students, but I do think it should be offered, just like art courses.

Software engineering is, as its name indicates, a field of engineering. I think it can appropriately be lumped in with other types of engineering, especially in the sense that you do not really start taking classes for it until college. Math, chemistry, and physics courses in high school do prepare engineers for the college work to an extent, but at least at Notre Dame, engineers are required to take all of those classes at the college level (possibly via AP credit), so there's no real need to mandate them during high school. Again, I'd advocate the CS4All Schools approach, as opposed to CS4All Students.

Finally, we get to computer science – the real science-y stuff of proofs and hypotheses. In my experience, this is the most difficult part of the field to grasp. It certainly necessitates a higher level of thinking than software development or learning how to program a microcontroller in C as an engineer. So initially, I'd be inclined to say it shouldn't be required of high school students, but most high schools do require chemistry and physics, which definitely get pretty science-y. And it is difficult to argue that chemistry and physics are universally applicable, regardless of occupation. I can understand an argument for biology, but the vast majority of people won't need to know about alpha and beta particles in radiation equations (I don't even remember if those are the right terms).

In sum, and more directly addressing the questions from the assignment, coding is not the new literacy. It need not be required of all students. It should, however, be offered in its variety of forms to give young students the exposure to one of our society’s biggest fields of development.

A big concern of both anti- and pro-CS4All advocates is getting qualified teachers. I don't find it very convincing that no one would want to teach high school CS just because they could always make more money in industry. With the volume of college graduates emerging with CS degrees, there are definitely some who are more interested in interfacing with people than just a screen. And there are plenty of people who aren't motivated solely by money. The argument that adequately training CS teachers will be difficult carries more weight. I imagine there isn't much research yet on how students learn best in CS, certainly less than in other fields.

Addressing the last question, the camel paper seemed pretty convincing that some people just aren't cut out for programming, but I really don't have anything else to base an opinion on. Regarding the second part, however, there is no need for everyone to learn to code. I thought Jason Bradbury made a nice point about how in X years, human coding will be obsolete. It could be sooner than one might think, given the rapid development of AI…


walking a mile in things you didn’t realize were someone else’s shoes

In the past, when I read comments from online trolls that spew vitriol, hate, and other evils, I was dumbfounded. How could people be so mean to other people?

These trolls aren't just taking a jab at an author or making a joke at their expense – the comments are some of the ugliest, darkest sentences you could imagine. I've seen the former in person, and while it's not always nice, it's at least a natural reaction. It was the fact that the trolls' comments went so far beyond any reasonable criticism that befuddled me (I also felt angry, but more so, I was confused).

There’s something to be said for the distance the internet makes you feel; commenting on another user’s post feels safe when you’re all alone in your home, like you can’t really be held accountable. But that still doesn’t seem to justify the passionate (and yet, removed) loathing that the comments reek of.

Fortunately, Lindy West's story shed some light on my confusion. Her interaction with her "cruelest troll" was very interesting. The comments the troll made and the actions he took were drastic overreactions and destructive personal attacks – the type of stuff that actually makes you feel sick inside – "gratuitous online cruelty," as West aptly puts it. So at least for me, I've got this picture of a deformed, malfunctioning human being sitting behind a computer. What normal person could write those remarks?

But then we hear from the guy, and he's incredibly remorseful. He apologizes profusely, he expresses mature, human thoughts, and he even donates money to a charity relevant to West's father (who was a target of his trolling). That does not fit the original picture I had of the guy. While I don't feel particularly remorseful, myself, for dismissing him initially because of the depraved things he did, I do feel a bit ignorant for not recognizing that there's a reason he acted the way he did. As inexplicable as one's actions may seem, there is almost always an understandable motivator behind them. In the case of internet trolls, you just really have to try in order to imagine why they're acting the way they do.

One of our authors observed that, “the most common tactic was to ‘diagnose’ [her].” The trolls would interpret the author’s sympathetic attitude toward a mother cooking for her family as the author, herself, needing sympathy. This is a bizarre, backwards diagnosis. It seems to me that the trolls are trying to “diagnose” the author because they want to be diagnosed, themselves. They want someone to understand what they’re going through.

So can we empathize with trolls? It’s tough, no doubt, but I think it would be useful.

Regarding some of the other questions:

What ethical or moral obligations do technology companies have in regards to preventing or suppressing online harassment (such as trolling or stalking)?

I'd like to think that people at those companies have moral intuitions, so they should feel obligated to make their service safe for users, but I'm not sure they have any legal obligation. It seems to be in their best capitalistic interests to suppress harassment too, though, since one or two high-profile harassment cases could really hurt their business.

Is anonymity on the Internet a blessing or a curse? Are “real name” policies useful or harmful in combating online abuse?

One of the articles mentioned that "real name policies" aren't super effective and that they do more harm than good. I can see the arguments against them, like protection from government surveillance or other identity protections, but I think it's undeniable that people behave better without the curtain of anonymity. Dave Eggers's novel The Circle paints a convincing picture of a society in which everyone has a single social media account linked to their real name, and in it, there are no evident trolls. Most people just won't act like degenerate villains when others know their identity.

Is trolling a major problem on the Internet? What is your approach to handling trolls? Are you a troll?!?!?

It’s definitely a problem. I haven’t had enough personal exposure to know if it’s a major problem. The only personal experience I have is when I used to write articles for Bleacher Report. I’d usually get a few comments on articles to the effect of, “this kid’s an idiot, he has no idea what he’s talking about.” While slightly offended and disappointed at the lack of constructive feedback, I couldn’t really argue because I didn’t have any idea what I was talking about. It was part of Bleacher Report’s model to let anyone write who could form complete sentences.


My main point comes back to the “walk a mile in someone else’s shoes” adage. We struggle to walk in the shoes of an internet troll because it’s so difficult to tell what the shoes look like. They’re hidden by a digital filter, and they don’t really want to be seen. But at the end of the day, they’re shoes that fit humans – it just takes a little more effort to put them on.

I didn’t really get into the question of whether or not it’s worth making the effort to put the shoes on.


ai

Give a man a fish and you feed him for a day. Teach a man to fish and you feed him for a lifetime.

Teach a person that 6 x 2 = 12, and he will not forget it. Teach a person how to multiply, and he will be able to figure out that 6 x 3 = 18.

Tell a machine to find the next biggest prime, and it will crunch through the computation faster than any human possibly could. Tell a machine why you want the next biggest prime, and it will figure out a cleverer approach to encryption.
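To make the first half of that contrast concrete, here is a minimal sketch of the brute-force version of the task (plain Python with hypothetical helper names of my own; real systems use much faster primality tests, but even this naive loop outpaces any human):

```python
def is_prime(n):
    # Trial division: check every potential divisor up to sqrt(n).
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def next_prime(n):
    # Return the smallest prime strictly greater than n.
    candidate = n + 1
    while not is_prime(candidate):
        candidate += 1
    return candidate

print(next_prime(1000))  # 1009
```

The point of the aphorism is that this is all the first machine can do: exactly what it was told, only faster.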



Artificial intelligence (AI) is terrifying, and our species’ foremost thinkers (Elon Musk, Stephen Hawking) have warned that it is the single greatest threat to humanity. AI can be referred to as general, narrow, strong, or weak, but roughly speaking, it is the ability for computers to be “smart.” They don’t have to think like humans necessarily, but they do have to produce results that most of us would classify as intelligent.

As one of the readings pointed out, we have yet to discover something in our brain that is not replicable in computers. We haven't figured everything out about our brains, but there is no empirical evidence, thus far, of any intangible quality that they might possess.

So the current state of the battle that pits human brains against machine brains is that humans have the better structure (billions of neurons arranged in layers), but computers have the faster processor (transistors switch more quickly than the chemical signals in our neurons propagate). For now, our more robust structure seems to give us the advantage, but there is no limit to the size of computers (i.e., number of transistors), and we could conceivably mimic the structure of our brains in computers eventually.
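To picture what mimicking that structure might look like, here is a toy illustration of my own (not how real neural networks or brains actually work): a single artificial "neuron" is just a weighted sum pushed through a threshold.

```python
def neuron(inputs, weights, bias):
    # Weighted sum of the inputs, then a threshold: fire (1) or don't (0).
    activation = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 if activation > 0 else 0

# Even one unit computes something: this one "fires" only when
# both of its inputs are on (a logical AND).
print(neuron([1, 1], [0.6, 0.6], -1.0))  # 1
print(neuron([1, 0], [0.6, 0.6], -1.0))  # 0
```

Stack billions of these units into layers, and you get something like the structural advantage our brains currently hold over machines.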

That leads to the conclusion that computers will surpass humans in terms of intelligence, which is scary. The scarier part, to me, is that computers could potentially keep learning, becoming more intelligent than we can fathom, but that doesn’t seem definite.

Machine learning is the trick for computers to speed past us. AlphaGo, the Google creation that handily beat the world’s best (human) Go player, is a technical marvel. It analyzes game situations at an absurd rate, making tiny adjustments to its algorithm to play the game – in other words, it is always learning, but not like a human…

It does not take breaks. It does not have days when it just doesn’t feel like practicing, days when it can’t kick its electronic brain into focus. Day in and day out, AlphaGo has been rocketing towards superiority, and the results are staggering.

AlphaGo will learn as long as it's connected to a power source. It is worth noting, however, that it is learning within human-made rules. It may evolve to use new tactics to learn, but they will have been derived from the initial rules that were programmed by humans, which leads me to believe that humans could eventually reach those tactics too – they aren't unfathomable to our minds, albeit many years down the road. And we wouldn't be getting much help from the computers, since, as the Atlantic article put it, we don't really understand how AlphaGo is learning; the adjustments it's making aren't intuitive to us.
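As a caricature of that style of learning, here is plain gradient descent on a one-number problem (my own toy example, nothing like AlphaGo's actual training): evaluate the error, nudge the parameter slightly downhill, repeat.

```python
def learn(target, steps=1000, rate=0.1):
    guess = 0.0
    for _ in range(steps):
        gradient = 2 * (guess - target)  # slope of the squared error (guess - target)**2
        guess -= rate * gradient         # one tiny adjustment
    return guess

print(learn(7.0))  # converges to ~7.0
```

Each individual step is trivially understandable; the eeriness comes from millions of such adjustments compounding into behavior no human can interpret.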

As Christopher Moyer puts it in The Atlantic, "AlphaGo isn't a mysterious beast from some distant unknown planet. AlphaGo is us… AlphaGo is our incessant curiosity. AlphaGo is our drive to push ourselves beyond what we thought possible."



So the question is, if we teach one of our supercomputers to learn, will it learn like the best human pupil we’ve ever seen, or will it take on a mind of its own that is the first domino in the fall of the human race?
