CS4Some

Incredibly relevant to this conversation is the previous discussion we had about what field computer science should fall into. Is it engineering, science, or art? Or something else? In my post on that topic, I describe how what most people think of as computer science can be categorized as engineering, science, or art, and it depends on what exactly you’re doing. The relevance to this topic is that each classification should be treated differently when it comes to primary/secondary education.

Software development has many similarities to art. Much like artists, developers work to hone a set of skills applicable to the type of work they want to produce. JavaScript or oil painting, Objective-C or pastels, C++ or sculpture, each category requires some practice and knowledge that can largely be developed by doing. As the developer creates a product, he iterates many times through different designs and tactics, testing on an audience along the way. Painters begin with a sketch (like a UML diagram?), then apply layers and layers of paint to the canvas, sometimes painting over old features to give way to new ones. Based on this comparison, I don’t believe software development should be required of all students, but I do think it should be offered, just like art courses.

Software engineering is, as its name indicates, a field of engineering. I think it can be lumped appropriately in with other types of engineering, especially in the sense that you do not really start taking classes for it until college. Math, chemistry, and physics courses in high school do prepare engineers for the college work to an extent, but at least at Notre Dame, engineers are required to take all of those classes at the college level (could be via AP), so there’s no real need to mandate them during high school. Again, I’d advocate the CS4All Schools approach, as opposed to CS4All Students.

Finally, we get to computer science – the real science-y stuff of proofs and hypotheses. In my experience, this is the most difficult section of the field to grasp. It certainly necessitates a higher level of thinking than software development or learning how to program a microcontroller in C as an engineer. So initially, I’d be inclined to say it shouldn’t be required of high school students, but most high schools do require chemistry and physics, which definitely get pretty science-y. Then again, it is difficult to argue that chemistry and physics are universally applicable, regardless of occupation. I can understand an argument for biology, but the vast majority of people won’t need to know about alpha and beta particles in radiation equations (I don’t even remember if those are the right terms).

In sum, and more directly addressing the questions from the assignment, coding is not the new literacy. It need not be required of all students. It should, however, be offered in its variety of forms to give young students the exposure to one of our society’s biggest fields of development.

A big concern of both anti- and pro-CS4All advocates is getting qualified teachers. I’m not convinced by the argument that no one would want to teach high school CS because they could always make more money in industry. With the volume of college graduates emerging with CS degrees, there are definitely some who are more interested in interfacing with people than with just a screen. And there are plenty of people who aren’t motivated solely by money. The argument that adequately training CS teachers will be difficult carries more weight. I imagine there’s not much research on how students learn CS best, certainly less than in other fields.

Addressing the last question, the camel paper seemed pretty convincing that some people just aren’t cut out for programming, but I really don’t have anything else to base an opinion off of. Regarding the second part, however, there is no need for everyone to learn to code. I thought Jason Bradbury made a nice point about how in X amount of years, human coding will be obsolete. It could be sooner than one might think based on the rapid development of AI…


walking a mile in things you didn’t realize were someone else’s shoes

In the past, when I read comments from online trolls that spew vitriol, hate, and other evils, I was dumbfounded. How could people be so mean to other people?

These trolls aren’t just taking a jab at an author or making a joke at their expense – their comments are some of the ugliest, darkest sentences you could imagine. I’ve seen the milder jabs and jokes in person, and while it’s not always nice, it’s at least a natural reaction. It’s the fact that the trolls’ comments go so far above and beyond a reasonable criticism that befuddled me (I also felt angry, but more so, I was confused).

There’s something to be said for the distance the internet makes you feel; commenting on another user’s post feels safe when you’re all alone in your home, like you can’t really be held accountable. But that still doesn’t seem to justify the passionate (and yet, removed) loathing that the comments reek of.

Fortunately, Lindy West’s story shed some light on my confusion. Her interaction with her “cruelest troll” was very interesting. The comments the troll made and the actions he took were drastic overreactions and destructive personal attacks – the type of stuff that actually makes you feel sick inside – “gratuitous online cruelty,” as West aptly puts it. So at least for me, I’ve got this picture of a deformed, malfunctioning human being sitting behind a computer. What normal person could write those remarks?

But then we hear from the guy, and he’s incredibly remorseful. He apologizes profusely, he expresses mature, human thoughts, and he even donates money to a charity relevant to West’s father (who was a target of his trolling). That does not fit the original picture I had of the guy. While I don’t feel particularly remorseful, myself, for dismissing him initially because of the depraved things he did, I do feel a bit ignorant for not recognizing that there’s a reason he acted the way he did. As inexplicable as someone’s actions may seem, it is rarely the case that they don’t have an understandable motivator behind them. In the case of internet trolls, you just really have to try in order to imagine why they’re acting the way they do.

One of our authors observed that, “the most common tactic was to ‘diagnose’ [her].” The trolls would interpret the author’s sympathetic attitude toward a mother cooking for her family as the author, herself, needing sympathy. This is a bizarre, backwards diagnosis. It seems to me that the trolls are trying to “diagnose” the author because they want to be diagnosed, themselves. They want someone to understand what they’re going through.

So can we empathize with trolls? It’s tough, no doubt, but I think it would be useful.

Regarding some of the other questions:

What ethical or moral obligations do technology companies have in regards to preventing or suppressing online harassment (such as trolling or stalking)?

I’d like to think that people at those companies have moral intuitions, so they should feel obligated to make their service safe for users, but I’m not sure they have any legal obligation. It also seems to be in their best capitalistic interest to suppress harassment, since one or two high-profile harassment cases could really hurt their business.

Is anonymity on the Internet a blessing or a curse? Are “real name” policies useful or harmful in combating online abuse?

One of the articles mentioned that “real name” policies aren’t very effective and that they do more harm than good. I can see the arguments against them, like protection from government surveillance or other identity protections, but I think it’s undeniable that people behave better without the curtain of anonymity. Dave Eggers’ novel The Circle paints a convincing picture of a society in which everyone has a single social media account linked to their real name, and in it, there are no evident trolls. Most people just won’t act like degenerate villains when others know their identity.

Is trolling a major problem on the Internet? What is your approach to handling trolls? Are you a troll?!?!?

It’s definitely a problem. I haven’t had enough personal exposure to know if it’s a major problem. The only personal experience I have is when I used to write articles for Bleacher Report. I’d usually get a few comments on articles to the effect of, “this kid’s an idiot, he has no idea what he’s talking about.” While slightly offended and disappointed at the lack of constructive feedback, I couldn’t really argue because I didn’t have any idea what I was talking about. It was part of Bleacher Report’s model to let anyone write who could form complete sentences.


My main point comes back to the “walk a mile in someone else’s shoes” adage. We struggle to walk in the shoes of an internet troll because it’s so difficult to tell what the shoes look like. They’re hidden by a digital filter, and they don’t really want to be seen. But at the end of the day, they’re shoes that fit humans – it just takes a little more effort to put them on.

I didn’t really get into the question of whether or not it’s worth making the effort to put the shoes on.


ai

Give a man a fish and you feed him for a day. Teach a man to fish and you feed him for a lifetime.

Teach a person that 6 x 2 = 12, and he will not forget it; teach a person how to multiply, and he will be able to figure out that 6 x 3 = 18.

Tell a machine to find the next biggest prime, and it will do the computation faster than any human ever could; tell a machine why you want the next biggest prime, and it will figure out a cleverer approach to encryption.
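
To make the first half of that last contrast concrete, here is a minimal Python sketch (my own toy code, nothing from the readings) of the kind of literal instruction a machine will happily execute without ever knowing why it was asked:

    def is_prime(n):
        # Naive trial division: enough to illustrate the point, nowhere
        # near what real number-theory software would actually use.
        if n < 2:
            return False
        i = 2
        while i * i <= n:
            if n % i == 0:
                return False
            i += 1
        return True

    def next_prime(n):
        # Return the smallest prime strictly greater than n.
        candidate = n + 1
        while not is_prime(candidate):
            candidate += 1
        return candidate

    print(next_prime(100))  # prints 101

The machine will grind through candidates far faster than any person could, but nothing in those loops captures the “why” – the encryption problem the prime was wanted for in the first place.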


 

Artificial intelligence (AI) is terrifying, and some of our species’ foremost thinkers (Elon Musk, Stephen Hawking) have warned that it is the single greatest threat to humanity. AI can be referred to as general, narrow, strong, or weak, but roughly speaking, it is the ability of computers to be “smart.” They don’t necessarily have to think like humans, but they do have to produce results that most of us would classify as intelligent.

As one of the readings pointed out, we have yet to discover anything in our brains that is not replicable in computers. We haven’t figured everything out about our brains, but there is no empirical evidence, thus far, of any intangible quality that they might possess.

So the current state of the battle that pits human brains against machine brains is that humans have the better structure (billions of neurons arranged in layers), but computers have the faster processor (transistors switch far more quickly than our neurons fire). For now, our more robust structure seems to give us the advantage, but there is no hard limit on the size of computers (i.e. the number of transistors), and we could conceivably mimic the structure of our brains in computers eventually.

That leads to the conclusion that computers will surpass humans in terms of intelligence, which is scary. The scarier part, to me, is that computers could potentially keep learning, becoming more intelligent than we can fathom, but that doesn’t seem definite.

Machine learning is the trick for computers to speed past us. AlphaGo, the Google creation that handily beat one of the world’s best (human) Go players, is a technical marvel. It analyzes game situations at an absurd rate, making tiny adjustments to its algorithm for playing the game – in other words, it is always learning, but not like a human…

It does not take breaks. It does not have days when it just doesn’t feel like practicing, days when it can’t kick its electronic brain into focus. Day in and day out, AlphaGo has been rocketing towards superiority, and the results are staggering.

AlphaGo will learn as long as it’s connected to a power source. It is worth noting, however, that it is learning with human rules. It may evolve to use new tactics to learn, but they will have been derived from the initial rules that were programmed by humans, which leads me to believe that humans could eventually reach those tactics also – they aren’t unfathomable to our minds, albeit many years down the road. And we wouldn’t be getting much help from the computers, since as the Atlantic article put it, we don’t really understand how AlphaGo is learning; the adjustments it’s making aren’t intuitive to us.
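
The real AlphaGo pipeline (deep neural networks plus Monte Carlo tree search) is well beyond a blog post, and the article is right that its internal adjustments aren’t intuitive to us. But the bare idea of “tiny adjustments” isn’t mysterious. Here is a toy Python sketch, with invented features and results, of a program nudging the weights of a position evaluator toward the outcomes it observes – not AlphaGo’s method, just the general shape of learning by small corrections:

    # Toy illustration of "learning by tiny adjustments" (invented data,
    # not AlphaGo): a linear position evaluator nudged toward observed results.
    weights = [0.0, 0.0]      # how much we value two made-up board features
    learning_rate = 0.01

    # (features of a position, eventual result: +1 for a win, -1 for a loss)
    games = [
        ([3.0, 1.0], +1.0),
        ([1.0, 4.0], -1.0),
        ([2.0, 2.0], +1.0),
    ]

    for _ in range(1000):                 # replay the games many times
        for features, result in games:
            prediction = sum(w * f for w, f in zip(weights, features))
            error = result - prediction
            # The tiny adjustment: move each weight a little in the
            # direction that would have shrunk this error.
            weights = [w + learning_rate * error * f
                       for w, f in zip(weights, features)]

    print(weights)  # the weights drift toward values that roughly fit the results

Each pass changes the weights only slightly; the staggering part is that a machine can repeat passes like this millions of times, around the clock, without ever losing focus.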

As Christopher Moyer puts it in The Atlantic, “AlphaGo isn’t a mysterious beast from some distant unknown planet. AlphaGo is us… AlphaGo is our incessant curiosity. AlphaGo is our drive to push ourselves beyond what we thought possible.”


 

So the question is, if we teach one of our supercomputers to learn, will it learn like the best human pupil we’ve ever seen, or will it take on a mind of its own that is the first domino in the fall of the human race?


capitalism again (or net neutrality)

Net neutrality means preventing internet service providers (ISPs) from taking sides when it comes to providing users with content from content providers (CPs); that’s a lot of using the same few words, which illustrates the apt naming of ISPs and CPs, but it is also a little confusing. In simpler terms, net neutrality is keeping the internet even.

An uneven internet would be one on which certain content loaded faster because those content providers are receiving preferred treatment. That preferred treatment could be the result of paying a premium, a special business relationship, or something else, and it is known, formally, as paid prioritization. Paid prioritization is the chief interest at stake when we debate net neutrality.

Those in favor of net neutrality argue that paid prioritization is a way for the giant ISPs to play favorites, and a CP that doesn’t have a preferred relationship with an ISP is at a disadvantage. This is potentially problematic for a couple of reasons.

First, from a fairness perspective, our biggest ISPs are vertically integrated throughout the internet-relevant industries, so they have their own associated CPs that would undoubtedly get the preferred treatment without net neutrality. This would put other CPs at a business disadvantage, regardless of what the end user wants, and it stinks a little bit of monopoly.

Second, from an innovation perspective, CPs just starting out would have trouble breaking into the “fast lane” (the paid prioritization lane), resulting in a vicious cycle: harder to get new customers -> less money -> less paid prioritization -> fewer new customers, ad infinitum.

Opponents of net neutrality largely support a freer market approach, arguing that the industry’s innovation is really occurring at the ISPs, and that government regulation will impede progress. They argue that if customers are unhappy with a particular ISP’s prioritization (e.g. slowing down Netflix), they will switch to a different ISP, and natural market forces will eventually lead to the best distribution of ISP resources.

The problem with that reasoning is that many people in America don’t have the freedom to choose between ISPs – you’re stuck with whoever has developed the infrastructure in your area. This characteristic makes the telecom industry very similar to other “natural monopolies” in the US, and so it requires government regulation.

Monopolies are one of capitalism’s market failures and are dealt with by government regulation. Natural monopolies occur when it makes the most sense for just one company to develop infrastructure in a geographical area. You don’t want four different cable companies all digging their own cable lines to every house in a neighborhood – it would be a waste of resources. But if there’s only one company that provides cable in that neighborhood, they could charge customers exorbitant amounts of money for their service because there would be no competition. That’s why government regulation is important for natural monopolies – the government can ensure a fair price for customers.

The internet requires a serious amount of infrastructure, so a natural monopoly makes sense, but it’s important that the government has a hand in how ISPs conduct business.

I read all of the arguments against net neutrality, and they were all across the board.

David Cohen, an executive at Comcast, argues that the government can’t possibly keep up with modern technology, so it shouldn’t bother trying to regulate it. The executives at Comcast likely know little about the technology their company uses to actually provide people with internet, but they know enough to make important decisions, which they learn from the people below them who actually know how the stuff works. Likewise, politicians don’t need to know how every piece of the puzzle fits together; they just need someone who does get it to explain the big picture to them.

Grant Babcock argues that since there is “no dire threat to freedom hinging on [net neutrality],” it is not the government’s business to be involved. I guess he probably has bigger qualms with the government than net neutrality, though. Like NASA’s $19 billion budget in 2016, or the fact that the government pays for highways.

The crown jewel, however, was Jeffrey Dorfman’s invective against net neutrality. He expressed annoyance with “poor analogies” surrounding net neutrality, and then gave us these:

We win from having multiple flavors of ice cream in the store. We benefit from the large variety of cars available for purchase. The fact that most people cannot afford some of those models does not mean they should be removed from sale. Similarly, the fact that some businesses or consumers may choose to pay for better access to the Internet is not a bad thing. Some people pay more to fly first class, but they do not interfere with my travel in coach.

They have pretty much nothing to do with net neutrality.

 


Here is a link to the letter that Connor Brant, Alex Hansen, and I wrote to the editor of the Observer.


Encryption is a fundamental right only as it relates to privacy. Obviously it’s not something the Founding Fathers could’ve foreseen, but the magic in their words lies in their applicability to ever-changing technology and culture. Encryption is ultimately a privacy tool, but there’s no reason encryption, itself, should be a fundamental right. Additionally, privacy is a fundamental right, but that doesn’t mean it can’t be suspended by the government when it’s warranted.

The issue of complete, irreversible encryption is, in my opinion, only really an issue because of our partial distrust of the government. If American citizens trusted the government entirely, there wouldn’t be any questions asked when the government asked Apple to unlock an iPhone for an investigation. So, to me, encryption isn’t a big issue when aligning with a politician. To me, the heart of the issue is the heart of the candidate – choosing a politician I trust.

If I trust the president of the country, I may disagree with certain individual stances like encryption or immigration or climate change, but I am willing to go along with what he or she decides because I trust his or her judgement.

In any struggle, I like to visit the extremes of either side. With personal privacy, the extreme is that the government has no access to any of its citizens’ data (physical or digital) and is essentially powerless in investigations. With extreme national security, I imagine a surveillance state in which the government has access to all of the data they could ever want, including what each citizen is doing at every moment of every day. They’re both scary, but if we want a government at all, we should want it to have efficacy in dealing with problems we can’t deal with individually (catching a murderer, stopping a terrorist plot), so I definitely lean toward the national security side.

Again, it boils down to trust for me. If everyone in the country trusted the government to obtain our personal data responsibly (i.e. only when they need it for an investigation and not to use it against us unnecessarily), there would be no debate.

 


pirates: black beards or neckbeards

The Digital Millennium Copyright Act (DMCA) is a broad-stroke effort to prevent unauthorized access to and copying of copyrighted material. A couple of provisions dictate that, but the overall effect of the act, as noted by Kerry Maeve Sheehan, is that it blunders along trying to keep up with technology that evolves far faster than lawmakers can.

Piracy, which has generally come to mean using something without permission (like listening to a copyrighted song without paying for it), is one of the principal targets of the DMCA, because the internet makes it such an easy vehicle. The way the DMCA seeks to prevent piracy is by making illegal any attempt to “circumvent” digital rights management (DRM), and by stiffening the pre-existing penalties for doing so. (Circumvent is in quotes because pretty much every reading uses that term, and I believe it is purposefully vague.)

So the main idea is that the government is trying to prevent illegal music and movie downloads, which is a noble cause, but they aren’t really equipped to do it. Russell Brandom points out that “YouTube relies on user-generated flags to enforce its policies, which can make violations maddeningly inconsistent.”

This brings up the concept of Safe Harbor. The DMCA includes provisions to protect companies like YouTube, Facebook, and Google from liability for their users’ copyright infringement. This, at least, is a reasonable idea, since making these tech giants liable for users’ copyright infringements would only serve to discourage the development of novel file-sharing technology.

What the Safe Harbor provisions dictate is that an online service provider (OSP), basically any website, is not responsible for illegal activities conducted by users, provided the OSP meets a few regulations: (1) the company must have no knowledge of the infringements, (2) the company must have a copyright policy, and (3) the company must have an agent to whom copyright claims should be directed. (This information conveniently gathered from Muso’s DMCA explanation.)

Due to the sheer volume of video on YouTube, it is reasonable to believe they can’t have knowledge of all copyright infringements hosted on their site, but they do have to make an effort, hence the “user-generated flags.” It is easy to see how that reporting system could become immensely frustrating for YouTube’s most frequent users.

And what motivation is there for users to flag copyrighted material on YouTube? It must primarily be those who own the copyrights, plus a handful of specious white knights – everyone else enjoys the material and is complacent enough not to ask questions. This behavior falls somewhere on the spectrum of piracy. At the other end are people who actively seek, steal, and disseminate copyrighted material. Somewhere in between are people like me, who download the audio tracks of YouTube videos as MP3 files and occasionally make use of torrenting sites to access copyrighted material.

In my opinion, people in that category aren’t actively seeking to stick it to the recording labels or the movie studios; those behaviors are more characteristic of the pirates Stephen Witt describes: “The founders of [The Pirate Bay] were ideological in nature, seeking a revolution in copyright law.”

As you shift slightly more toward the harmless end of the piracy spectrum from the founders, you get active torrenters who spend considerable amounts of time seeding files and ensuring there’s always enough content for people – “the last of the ideologues: anti-profit, pro-freedom political dissidents… at considerable personal risk.”

But once you get back to my category, people are just doing it because it’s the easiest and cheapest option. With services like Netflix and Spotify, the easiest and cheapest option is changing for a lot of people. Whether the line is drawn at <$10/month or the fact that Netflix accounts are naturally shareable, people have started to give up piracy in favor of streaming services. But again, the motivation is not noble or moral, it is simply ease of access.

I think piracy might start to fade with the next generation, but I don’t think it’s a bad thing. To conclude, here are my thoughts, summed up nicely by Joss Stone:

Yeah, I love [Piracy]. I think it’s brilliant and I’ll tell you why. Music should be shared. […] The only part about music that I dislike is the business that is attached to it. Now, if music is free, then there is no business, there is just music. So, I like it, I think that we should share.

It’s ok, if one person buys it, it’s totally cool, burn it up, share it with your friends, I don’t care. I don’t care how you hear it as long as you hear it. As long as you come to my show, and have a great time listening to the live show it’s totally cool. I don’t mind. I’m happy that they hear it.


on patents, aka progress stiflers

Thomas Jefferson and Elon Musk, among the foremost thinkers of their respective eras, have both argued that patents stifle progress.

If a bright mind comes up with a clever idea in his field, it seems obvious that the field will move further along (which should hopefully be the ultimate goal of inventing and ideating) if that idea is shared with colleagues than if it is protected by patent laws. If the original ideator is more concerned with becoming famous for his or her particular idea or making money off of it, there is probably less incentive to share, but that’s only because of the way the patent system is perceived.

Elon Musk points out that releasing all of Tesla’s patents may actually boost the company’s economic position, by attracting more smart people to the technology base. More importantly, Musk believes that sharing the patents will further the cause behind Tesla, the “why” of his company, which is to create a more sustainable future in cars. That ought to be every inventor’s primary motivation, to further a cause, and patents should support that. To promote human growth in technology and culture is one of the World Intellectual Property Organization’s (WIPO) stated aims, but its efficacy is uncertain.

Another matter, addressed by Thomas Jefferson, is the use of the term “property” in describing an idea. Jefferson insists that property, as we know it most commonly (house, belongings, land), is only truly “owned” when we are currently occupying or using it. Other times, it is only owned as a result of social construction (i.e. if you aren’t standing on a piece of land, there’s no physical law that says I can’t come stand on it and call it mine, just the social laws and norms we’ve built up over the years). So how does an idea fit in?

How ideas work physically was not well understood in Jefferson’s time, and it is understood only marginally better now, but Jefferson makes a nice point:

That ideas should freely spread from one to another over the globe, for the moral and mutual instruction of man, and improvement of his condition, seems to have been peculiarly and benevolently designed by nature…

He goes on to relate the spread of ideas to sharing a flame. If I have a candle, I can light 50 other candles without diminishing my flame at all. And if I have an idea, I can share it with the world (quite easily now, thanks to the Internet) without losing the integrity of the idea, myself. Then, not only do I have a shot at coming up with the next great idea, but anyone who understands my idea also has that opportunity, which undoubtedly gives us a better shot at progress.

Getting back to the WIPO, its stated rationale for patents, or more generally for the protection and promotion of intellectual property, is threefold:

  1. Humanity needs technological and cultural progress.
  2. Protection of IP encourages further investment in ideas.
  3. Protection and promotion also create economic growth.

Elon Musk, Thomas Jefferson, and I would argue that you don’t need patents to promote growth in culture, technology, and the economy. Regardless of intention, the effect patents actually have is to encourage patent trolls, who can end up costing people more money than a patent is worth, and to inhibit other inventors and ideators from using novel ideas.

In short, patents stifle progress. They detract from the nobility of invention, and they suppress the free flow of information between colleagues.

“He who receives an idea from me, receives instruction himself without lessening mine; as he who lights his taper at mine, receives light without darkening me.” – Thomas Jefferson


capitalism = ads

Forbes’ list of the most valuable brands in the world in 2015 has Google at #3 and Facebook at #10. Other top-ten brands are Coca-Cola, McDonald’s, Toyota, Samsung, and Apple. There’s a significant difference between Google and Facebook and the rest of the companies – the amount of time it would take people to formulate an answer if you asked them what the company sells.

Coke, fast food, cars, electronics, and iPhones are all easy answers. But what are they going to say for Google and Facebook? It’s not something we think about when we interact with them daily.

With a little help, most people could probably come up with the fact that Google and Facebook make the majority of their money from advertising. But it would take a little more thought to grasp the implications of Google and Facebook making all their money from advertising…

Of course, they get lots of page views, so ads seem worthwhile, but the ads are most valuable when they’re targeted. And to deliver targeted ads, advertisers need to know about you. Your likes, your interests, your location at any time during the day.

So a company, whose primary interest is to serve its shareholders (most often by making money), has all kinds of useful data about you. And it’s definitely useful, as illustrated by the study about Facebook and “intimate” data, which showed that you can tell a great deal about a person with just a little information about a lot of other people.
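
As a rough illustration of that idea, here is a small Python sketch of guessing something about a new user from other users’ likes. Everything in it is invented for the example (the page names, the traits, the nearest-neighbor-style vote); real systems use far more data and far better models, but the shape is the same:

    # Invented example: infer a trait for a new user from the likes of
    # other users whose traits are already known.
    other_users = [
        ({"cooking_page", "gardening_page"},  "parent"),
        ({"cooking_page", "minivan_reviews"}, "parent"),
        ({"esports_page", "energy_drinks"},   "student"),
        ({"esports_page", "campus_memes"},    "student"),
    ]

    def guess_trait(likes):
        # Score each trait by how many likes its users share with ours,
        # then return the trait with the most overlap.
        votes = {}
        for their_likes, trait in other_users:
            votes[trait] = votes.get(trait, 0) + len(likes & their_likes)
        return max(votes, key=votes.get)

    # A brand-new user who has only liked two pages:
    print(guess_trait({"cooking_page", "campus_memes"}))  # prints parent

Two likes and four strangers already produce a guess; scale the list of “other users” to a couple of billion and the guesses stop feeling like guesses.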

Assume that Google and Facebook know everything about you – your interests, your relationships, what you’re passionate about, what makes you angry, how you spend weekends, what you do after work – because it’s not that far from the truth. Is that a bad thing?

Even if they’re selling that information to advertisers who then tailor ads to increase their efficacy, the advertisers don’t care who John Smith is or why he spends so much time building model airplanes. And you don’t care because you have AdBlock and don’t even see the ads.

One danger is the information passing into the wrong hands, to someone who could use it maliciously. That is bad news, and it might be worth researching a company before passing a chunk of personal information to them to store, but you might justify the risk by assuming no one would ever want your data. You’ve got nothing to hide (second week in a row this has come up – might be worth addressing in further depth if it comes up again).

And you can’t really blame Google and Facebook. Without advertising revenue, they wouldn’t be able to provide you with their services. Well, couldn’t they get money some other way? They could make you pay a monthly fee. Or pay for a premium account. Or sell a lot of sweatshirts. But it’s probably safe to assume they’ve considered those options and determined that advertising, while it sacrifices privacy to a certain degree, is the best way for them to make money.

If there were a paradigm shift in the way the public perceives Google and Facebook, and advertising came to be viewed as evil, it would make economic sense to ditch the ads and try something else. But as long as we’re content to trade information for the use of a service, they’re content to work harder to give us ads we actually care about.

 
