does the government need a backdoor?

This is a really perplexing question. I read all of the articles and sat and pondered for a while, and still, no answer is overwhelmingly clear to me.

The government’s primary argument is national security. They posit that Apple ought to cooperate and build in a feature that allows simple entry to the locked phone, so that investigators can access data to further their investigation of a terrorist attack.

On one hand, the government has been doing this for years – collecting data from people and corporations to aid in their investigations. For the most part, people have gone along with it. And for the most part, the government seems to handle the evidence responsibly.

On the other hand, the growing presence of technology in our society has changed the playing field – it’s not as cut and dried as testifying that you saw someone do something. In order to turn the evidence over to the government, Apple would have to develop a modified operating system that would allow anyone in possession of it to essentially break into any phone.
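For context, what the FBI reportedly asked for in the San Bernardino case was firmware that disables the erase-after-ten-failed-attempts feature and the forced delays between passcode guesses. Strip those limits away and a short numeric passcode falls to plain brute force. Here is a minimal sketch in C of why that matters – try_passcode() is a hypothetical stand-in for the device’s real unlock check, which is not a public API:

/*
 * Illustrative sketch only: why removing retry limits defeats a
 * short passcode. try_passcode() is hypothetical -- the real check
 * happens inside the phone's secure hardware.
 */
#include <stdio.h>
#include <stdbool.h>

static bool try_passcode(const char *guess) {
    (void)guess;   /* stand-in: always fails in this demo */
    return false;
}

int main(void) {
    char guess[5];
    /* With the 10-try erase and inter-attempt delays gone, a 4-digit
       passcode takes at most 10,000 attempts. */
    for (int i = 0; i < 10000; i++) {
        snprintf(guess, sizeof guess, "%04d", i);
        if (try_passcode(guess)) {
            printf("unlocked with %s\n", guess);
            return 0;
        }
    }
    printf("exhausted the 4-digit space\n");
    return 0;
}

Ten thousand guesses, submitted electronically with no delays, would take minutes. The encryption itself is never broken – the lock on the front door is simply removed.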

Of course, that software would likely be kept as secure as possible in Apple’s safe, but one of the biggest worries is that once the precedent is set, the government could run this procedure whenever it wants.

If we trust our government to use that power and the evidence that comes with it responsibly, it seems to me that it is an okay precedent. Just like we’re willing to let the government come search our homes (an invasion of privacy) if they have a search warrant, we ought to be willing to let them read our emails or look at our pictures.

But that’s a big if.



The whole dispute/investigation/case is a bit dramatic, in my opinion. I think this is an important debate to have (privacy vs. security) but framing it as “encryption is the reason the Paris attacks happened” vs. “giving the backdoor to the government is the end of your privacy forever” is overblown.

With regard to the government’s melodrama: regardless of whether or not the terrorists in Paris achieved their ends with encryption, there shouldn’t be a reasonable expectation that the intelligence powers of the world will catch every potential attacker. People can be secretive with or without encryption – steganography, for instance.
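To make the steganography point concrete, here is a toy sketch in C that hides a message in the least-significant bits of a cover buffer, the way text can be tucked into image pixel data. It illustrates the general technique only – it is not a scheme any actual attacker is known to have used:

/*
 * Toy steganography: hide a message in the least-significant bits
 * of a cover buffer (stand-in for innocuous image pixel bytes).
 */
#include <stdio.h>
#include <string.h>

/* Write each bit of msg (including its '\0') into the LSB of one cover byte. */
static void embed(unsigned char *cover, const char *msg) {
    size_t nbits = (strlen(msg) + 1) * 8;
    for (size_t i = 0; i < nbits; i++) {
        unsigned bit = (msg[i / 8] >> (i % 8)) & 1u;
        cover[i] = (unsigned char)((cover[i] & ~1u) | bit);
    }
}

/* Reassemble bytes from the LSBs until the '\0' terminator appears. */
static void extract(const unsigned char *cover, char *out, size_t maxlen) {
    for (size_t i = 0; i + 1 < maxlen; i++) {
        char c = 0;
        for (int b = 0; b < 8; b++)
            c |= (char)((cover[i * 8 + b] & 1) << b);
        out[i] = c;
        if (c == '\0') return;
    }
    out[maxlen - 1] = '\0';
}

int main(void) {
    unsigned char cover[512] = {0};   /* pretend these are pixels */
    char recovered[64];
    embed(cover, "meet at dawn");
    extract(cover, recovered, sizeof recovered);
    printf("hidden message: %s\n", recovered);
    return 0;
}

Flipping only the lowest bit of each byte leaves an image visually unchanged, which is exactly why banning encryption would not stop a determined secret-keeper.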

In the truck driver sexual assault case that the government cites (in which an incriminating video on the truck driver’s phone led to a conviction), an investigation stalled by lack of access to the phone would proceed much the same way it would have 30 years ago, when the truck driver wouldn’t have been carrying a smart phone at all. Sure, there are obvious uses for subpoenaing smart phones, and it is vitally important for the government to do its best to keep up with the technology curve, but to rely solely on evidence from smart phones would be unsafe hyper-dependence.

With regard to Apple’s comments, it seems that 100% secure, end-to-end encryption might be too much if it conflicts with the government’s ability to access warranted evidence. Apple may feel compelled to offer customers the finest encryption service in the land, but they should also feel morally obligated to prevent evil acts from happening when it’s in their power.

I’m not suggesting they snoop around people’s data, but if it is determined that the FBI has warranted power to get the info, Apple (and other tech providers) should be able to comply.

I’ve never felt strongly about the “nothing to hide, nothing to fear” argument, but maybe that’s just because I haven’t seen it exploited before…



guide to the interview experience

Here is the interview guide I created: EthicsNDCSInterviewGuide.

The most important part of that guide is the final section: the decision.

Preparation for interviews takes place over the course of years; it is a culmination of everything you have learned, all of your experiences, and the hundreds of hours of work you have put into developing your skills. Honestly, a week or a month before an interview is not enough time to seriously affect performance – so much more factors in, developed over the rest of your life.

The interview itself is almost entirely a direct result of that preparation. Some people interview well; others get uncomfortable talking about themselves or their work. Again, the only way that changes is with relevant experiences that come over time.

The decision, however, can swing any number of ways, depending on what you consider important, and it will have a very serious effect on the rest of your life. Past experiences shape the decision by determining which factors you are likely to weigh most heavily, but I’m here to say: the people matter most.

Sure, you can narrow down your choices (if you are fortunate enough to have a lot) by location and role (e.g. I really don’t want to live in Texas, or this position is too technical for my liking), but like I said in the guide, regardless of what your job is, you’ll be working with teammates on a daily basis.

Do you like the people at the companies you’re considering? Do they have a palpable culture that you feel like you belong to? Those are important questions to ask, and ones I wish I’d asked myself when applying to internships a couple of years ago.


Regarding the question about whether colleges should adjust their curriculums to include more interview prep, no way.

What I like about Notre Dame’s CSE program is that it starts in what I would call the middle, with C. We work upward from a basic, early, memory-intensive language to object-oriented C++ and then to higher-level languages like Java and Python. We also drop down to the transistor level and work back up to C via Logic Design, Computer Architecture, and Operating Systems.

We are given a pretty thorough education in how computing works (computer science or computing science?), which prepares us to do a wide variety of jobs. There are a lot of detailed skills we don’t pick up in (core) classes that could be useful in interviews, but I think it’s up to students to develop passions on the side for specific types of programming, from app development to cybersecurity to data mining.

Once you’re equipped with the basics from core CS classes, try out some new stuff and figure out what pops out at you. For me, it was mobile app development because of the ubiquity of smart phones. Developing an app for iPhone offers me an unparalleled chance to express myself and (hopefully, positively) impact a lot of people.



Bradley Manning

What’s my opinion?

There are a couple of pretty big questions involved in this case that do not have definitive answers, so I can only take an opinion:

Is it okay for the government to lie to the American people in order to more effectively carry out what it believes is in the people’s best interest?

I generally believe lying is okay when done for noble reasons, and it’s easy to conjure up scenarios in which lying is an okay thing to do. Say there is a serial killer loose and everyone is terrified to the point of a lower quality of life. To maximize utility, the government says they’ve caught the killer and puts an innocent person on display for the people to see. The quality of life of the people goes up despite the lie. All good?

Well, the real killer is still out there, and they’re sacrificing the integrity of the innocent person, which is actually an entirely different question, but stacking moral dilemmas atop one another seems pretty run-of-the-mill for the US government.

So next question (I’ll get back to the first one later): is it okay to violate the rights of one person in order to benefit many others?

Yes, as evil as that sounds. Please bear with me, however, as we descend into another level of moral dilemmas – the Trolley Problem! In brief, a trolley is hurtling down the tracks toward an unsuspecting crowd. If it is left undiverted, they will all die. But you can pull a lever that will divert the trolley onto a different track. On this track, there is but one unsuspecting person. So you can leave the crowd to die through your own inaction, or you can save them and actively get blood on your own hands. Tough spot to be in…

I would pull the lever, because I would feel the pain of the crowd dying by my own inaction just as much as by my own action, since I had the power to stop it. So I am actively sacrificing one person’s right to survive as the cost of saving many others. So I would do what I described above with the fake serial killer. So I think it’s okay for the government to lie to the American people in order to more effectively carry out what it believes is in the people’s best interest.

The dilemma is, unfortunately, rarely so easy to weigh (several lives versus one). I do believe, though, that a strong moral compass can guide government leaders to make the right decisions every time.

Where does Bradley Manning fit in? With all the access he had to military and civilian intelligence, he must have felt that the government’s lying was not worth the gain – killing innocent civilians with near-reckless abandon, for instance. Tragically, he could not resolve his moral quandary through the chain of command, so he was left to less ideal methods.

As he was on the cusp of whistleblowing in the advanced-technology age, he didn’t hit everything spot on – you can’t expect him to. It’s unclear to me, given the sheer number of documents, whether everything he leaked was necessary, but he did seek to right what many of us would agree was a wrong.

In doing so, he exposed the broken system of chain-of-command reporting, which is a very valuable thing in itself. Hopefully that gets fixed, so that in the future, matters of moral failure can be handled internally.



diversity is inherently good

Whether we’re talking about a diversity of choices for lunch, a diversity of companies that develop smart phones, or a diversity of people working together, diversity is good. Diversity allows people to develop different tastes, and people with different tastes keep life interesting.

Aldous Huxley’s Brave New World depicts a world in which there is very little diversity, and it is decidedly an unpleasant one to live in. After all, humans are naturally expressive beings. At the highest level, we have a diversity of beliefs, and it trickles all the way down to a diversity of ways people tie their shoes or style their hair.

Mark Zuckerberg might argue that a diversity of wardrobe choices is draining, but as I claimed in my last post on burnout, if you have to save energy by eliminating small choices, your big decisions are demanding too much energy – you need to moderate.

The value of a diversity of people manifests itself when a team is working to create something. That is a very vague claim, but the idea is that the more perspectives you can include, the more well-rounded your product or process will be.

If you are designing a new smart phone, four white male graduates from Stanford will probably create something that works really well and looks pretty sleek, but will it be affordable for people in other parts of the world? Will it meet the needs and desires of the average smart phone user, or will it be focused on what its creators think would be most useful? Market research can be beneficial, but having a diversity of perspectives within the team is a much more solid way to ensure the end result is well-suited to meet the needs of many.



Diversity is important, and empirical evidence suggests the tech sphere is lacking in it. Who’s to blame? Should anything be done about it, or should we sit back and hope it works itself out?

The blame lies with the culture of programming, and that culture has been built up over the last half century by a variety of sources of influence, from TV shows to software companies to the individuals involved. As several of the readings point out, women and some ethnic minorities are steered away from the software industry because their perception of the industry does not align with how they see themselves. That is an entirely natural reaction. It is difficult to go against the grain and sit down in a room full of people who look and act differently from you.

We need a cultural shift, which is not an easy thing to orchestrate. When the orchestra needs directing, it looks to the man at the front waving his hands around passionately. Who are the maestros of the tech industry, the conductors of change?

Leaders at the leading companies are certainly big voices, and some of them have spoken out about the debilitating lack of diversity in their industry. They have acted upon that notion as well, committing hundreds of millions of dollars to efforts aimed at increasing diversity.

While those leaders have great potential for change, a cultural shift really moves at its roots, by way of each individual participant in the culture. That’s you and me and every other young mind working to develop beautiful software. By changing the way we act – thinking with a broader perspective and speaking with a more open mind – we invite people from more diverse backgrounds to join us.


live blogging burnout

I spent about two hours reading all of the links posted and then trying to write a blog post in response to the third question, about burnout. After two hours of typing out paragraphs and then deleting them because they sounded senseless and boring upon a second read-through, I decided that I had experienced firsthand the burnout the authors were talking about, and that was a good enough interaction with the readings to satisfy my desire to complete the blog post assignment. 500 words or not, I gained perspective.



To deal with my minor burnout on the blog post, I did what the authors did – I took a break: in this case, a short night’s sleep. So what caused the burnout last night?

I was pretty exhausted from playing in an ultimate tournament in Alabama over the weekend, and I didn’t get started on the post until 11:30 p.m. But while fatigue was definitely a factor, I believe that if I’d been inspired by the readings, I could have written an inspiring blog post. Unfortunately, reading about tech workers getting tired of their jobs just made me more tired.

Can I use this experience to learn and avoid burnout when I enter the workforce? Well, what did I learn? Did I learn more from the readings or my short firsthand experience?

I think it takes some hindsight to learn from a burnout experience. As several of the authors prescribe, you need a nice, long break when you are suffering from burnout, and the purpose of that break is twofold: first, to help your mind and body rejuvenate, as they were likely exhausted before you took time off; second, to give you the chance, sometime during the break, to start to understand what exactly led to your burnout – what left you feeling dissatisfied.

Andrew Dumont offers some advice related to the first purpose in the form of a list of tips, including morning exercise to get the body in gear, a nightly walk to clear the head, and paring down the number of decisions one has to make on a daily basis. For the most part, his suggestions sound great to me, and I think they would improve the quality of life for anyone, regardless of how burned out they feel.

He provides an example of “limiting decisions,” however, that didn’t immediately sit right with me: President Obama’s decision to only own blue and gray suits, so as to minimize the energy he has to expend to pick an outfit every morning. Mark Zuckerberg does something similar – he only wears gray t-shirts.

“I really want to clear my life so that I have to make as few decisions as possible about anything except how to best serve [the Facebook] community.”

While perhaps the president of the United States and the visionary of Facebook are extreme examples, as they have more decisions to make than the average tech worker, their decisions to cut out wardrobe variety seem to me a diminishment of expression.

President Obama also references food choices as an energy-sucker. If everyone in the tech industry were given a set of outfits that were perfectly adequate and a big pill every morning that contained all the nutrients one could need, all those people would be spared the energy of making those decisions. But our world would also start to look like the dystopian one Aldous Huxley describes in Brave New World.

The point I am trying to make is that if you are making so many decisions that you are too exhausted to pick what to wear or decide between spaghetti and fried chicken, you are doing too much. You need to moderate. You’re on the track to burnout. Just look at the before-and-after pictures of President Obama – or of any president, for that matter. Being president of the United States is not a sustainable lifestyle.

Well, I made it – a full blog post! From it, I gained some thoughts about how to avoid burnout: be passionate about your work, but not too passionate. Live in moderation. Find a balance. Live sustainably.



How much does the manifesto reflect my thoughts? A fair amount. How much do I identify with the portrait of C.S. Irish? Somewhat. In writing the two documents, Tim, Heather, and I tried to pick out the most recognizable stereotypes for Notre Dame Computer Science students. As with most stereotypes, they’re mostly true, but not all of them apply to every member, including me.

Is that a bad thing? Maybe it depends on the stereotype. If it’s a mean stereotype, it can make people feel bad. But if it’s a nice stereotype, why should the stereotyped individual have a problem with it? One might argue that there’s no such thing as a nice stereotype, that stereotypes have a negative connotation. But we can certainly make positive sweeping statements about a group – call it a generalization.

The danger with generalizations is that they lead to false assumptions about the minorities within the group. Whether the generalization is good or bad, those minorities are being misrepresented.

So is there any harm in someone thinking you’re really nice just because you go to Notre Dame? Or someone thinking you’re good at basketball because you’re really tall? It could lead to a misunderstanding, but it’s like being given a 95 on an exam when you really only earned an 85 – it doesn’t hurt, and you could pretty easily correct it if you wanted to. Or you can let it stand.

It does not seem like there’s anything wrong with stereotyping unless the stereotype is malicious (all malicious things are bad anyway), but is there anything good about stereotypes that makes them worthwhile?

Is it helpful to be able to infer that a particular Notre Dame graduate will be a good employee because Notre Dame graduates generally are? Seems helpful. Stereotypes can help simplify the complex ideas, traditions, and culture of a group to a level that is understandable by an outsider.

So if the CEO of a company knows she wants good employees who are good people, she knows she can hire people from Notre Dame, despite not necessarily knowing why they’re good people. The CEO doesn’t need to understand the myriad factors that influence a Notre Dame student’s life and make him or her such a wonderful person, so her job is easier.

What distinguishes a good stereotype from a bad one? You could argue that stereotypes that paint a group of people in a bad light are bad. It’s probably worth avoiding them, but they do help accomplish the job of simplification. The really nasty stereotypes are the ones that aren’t true. They may have once been true, but something changed, and now an entire group is being misrepresented to the rest of the world.


The presence of a manifesto can be helpful or hurtful – it depends on whether the entire group is on board with it or just the one member who created it. One person could write up a manifesto and proclaim that it describes the essence of the group, but if the author is part of the group’s minority, you get the problem of misrepresentation again.

If a group mostly agrees on a manifesto, on the other hand, it can be a unifying force – a source of culture. I couldn’t really tell you where the one Heather, Tim, and I wrote falls, since we’re just three people.


interview for culture

A great cultural fit is vitally important for a new hire. I believe a great cultural fit who is an average-level developer will far outperform a great developer who is only an okay cultural fit at a company.

Why – My Experience

First of all, it is important that a company has a culture. I’m not too hung up on what every company chooses, but like Alexander Hamilton, I believe everyone should stand for something. Not only does it provide a basis for important decisions, it helps to attract like-minded people. There is danger in some types of homogeneity (stagnation, close-mindedness, backwards progress), but not when it comes to core beliefs – in fact, a group is cohesive and productive when its members share similar belief systems. This is how tribes are formed.

Sam Phippen uses the word tribe in a negative light. He claims it is human nature to form tribes, but that tribes beget ineffective hiring practices – that CS people just hire more CS people, and CS people aren’t always the right people for the job.

If hirers are simply looking for people with the same training they have, that would be problematic. But if the main goal of an interview is to discern cultural fit, technical background will never be the only factor in a decision, so you don’t have to worry about a company becoming a homogeneous group of algorithm prophets. Surely a Notre Dame Computer Science graduate can hold the same core values as a Ruby bootcamp graduate – say, both believe that interaction with the customer is essential to a well-run business. That is how you start to develop a tribe.

What It Means For The Interview

The technical skills of the interviewee are going to be tough to assess – the readings made that exceedingly clear. But why worry about it when the most “successful” hirers, like Google, publish their methodology? So give the candidate four interviews, a pair programming assignment, an inter-departmental interviewer, and a fun lunch break – and trust that you can get a sense of their technical prowess.

Focus on figuring out who the candidate is. Unfortunately, you are probably going to have a more subjective rubric for this section. Fortunately, the questions you want to ask are easier to ask and easier to answer.

Q: Do you like working in teams?

A: Yeah, I played Ultimate Frisbee in college, so building something as a team comes naturally to me.

Q: Why do you like programming?

A: I view it as a form of expression. And I believe I can create a positive impact for others.

Q: What do you do in your free time?

A: I’m really passionate about the outdoors and relationships, so I like to go on adventures with my friends.

These questions do not have right answers, but they give the interviewer a sense of what the interviewee is like: what’s important to them, what motivates them. The interviewee ought to feel comfortable answering these questions, and in many cases, it should become clear pretty quickly whether the candidate would be a good cultural fit.

What It Means For The Career and Company

If every new hire’s core values ring true with the company’s, you have a tribe. Lone wolves survive, tribes thrive. Twenty-five people working toward a common goal will accomplish a lot more than just one person working toward a goal.

When a group of people is motivated by more than just a biweekly paycheck, you get buy-in. Much like I’d take a great cultural fit/medium developer over a great developer/okay cultural fit, I’d take 25 people who drew inspiration from a set of core beliefs over 125 people who signed up to have a job.

I interacted with a half-dozen recruiters while searching for a job. Some were satisfactory, some were peppy, some were exceptionally competent at their jobs, but only one was excited to talk to me every time I sent her an email or called her, and that’s where I will be working next year. Did I choose that company because of the recruiter? Partially, some indeterminate amount. Not because she was an awesome person, but because only someone who really buys into her company would be so passionate about her job. And everyone I’ve met there is the same way.

They conducted a two-way behavioral interview – they wanted to get to know me, and they wanted me to get to know them. That, in itself, was important, but I also felt like I fit in with them, and that’s a pretty great feeling to have going into my first job.


what is a hacker?

The nature of a hacker has changed dramatically over the course of computing’s relatively short history. This is inevitable – as the circumstances change, so do the inhabitants, and the circumstances have evolved greatly, from advancing technology to software’s growing prevalence in the world today. But the central ethos has remained consistent.

The readings mention phone phreaks as some of the original hackers: benignly exploring the features and intricacies of a system, at times exposing flaws and loopholes. That is the original hacker ethos, and from it, many related but distinct ethea branched, some similarly benign, but others malicious.

The hacker culture does not lend itself to self-promotion. Hackers, in accordance with the personality outlined in “A Portrait of J. Random Hacker,” do not seek validation from the mainstream (though they do often seek it from each other). This is evidenced by groups like WikiLeaks and Anonymous, who pride themselves on anonymity. While these groups, hacktivists perhaps, are not the phone phreaks, they do possess the same selfless quality.

The result is that many people today understand hacker culture only (or at least initially) as portrayed by the media. TV shows and movies that celebrate a rebellious, quirky youth with seemingly supernatural power at a terminal paint the hacker to be a hero. While there are still some who discover the “traditional hacker lifestyle” on their own, there is a growing number of interested people who are drawn to the idea of computational superpowers.

That crowd may not be as countercultural or low-key as the traditional hackers – they may be more focused on earning money or saving the world – but does that preclude them from claiming the hacker title? Brett Scott seems to think so, dismissing this crowd as yuppies. Scott says they’re ruining the hacker culture with gentrification.

The average hacker profile is undoubtedly changing, the compass is shifting, but that isn’t a bad thing. As I said earlier, the circumstances are changing, so change in the people is natural. Paul Graham’s notion of a hacker is modern and accepting. He compares hackers to makers like artists and writers.

Hearkening back to my previous post, the hacker Graham describes is the software developer I described. Not the computer scientist and not the software engineer, but the curious, artistic individual who seeks to use knowledge of coding to create something. That is a step away from the phone phreaks, but I believe the ethos is the same: experiment with the capabilities and limits of a system, do something cool. So while the first hackers toyed with phone lines, today’s hackers cobble together intuitive, interactive mobile applications and code libraries. The material has changed but the central ethos has remained.


Paul Graham’s comparison to painters is romantic and appealing to someone like me, who is frightened by the inhuman component of programming but attracted nonetheless to its potential for expression.

Beauty. Empathy.

Two words with which I strongly identify.

To me, creating something beautiful has inherent value – it need not contribute any further to be deemed useful to society. That is, no doubt, an artistic belief that might make certain computing aficionados scoff, but to them I would suggest reading Graham’s comparison of hacking and painting. It is compelling in so many ways.

Creating a cool application does not start with a master plan drawn out in points and plots. It is an iterative process that starts with a rough sketch and proceeds, often unmethodically, through a variety of stages, never really reaching completion. That sounds an awful lot like the development of a painting or a story.

For software that is aimed at anything other than pure self-expression, which is likely the majority, empathy is the most important quality a developer can possess. If you intend for other people to use your software, there is nothing more relevant than how those people want to use it.

The developers behind TurboTax did a fantastic job empathizing with taxpayers. Many developers, when asked to develop a tax-paying application for the internet, would look up the government tax forms and create electronic versions of them for users to fill out. While that may save an ounce of trouble, it is about as unpleasant for users as filling out the forms with a pen.

TurboTax, on the other hand, has a beautiful interface that asks the user simple questions – the type of question one feels accomplished after answering, like your name or birthday or marital status. “Ha, this is like a quiz I know all of the answers to” is a great sentiment to have while filling out taxes. The developers clearly spent some time thinking about what users would like and used that as the driving principle for their software.

I love developing with the principles of beauty and empathy floating around in the back of my head. So I am a Paul Graham hacker. I am not the portrait of J. Random Hacker – I am like him in some ways, but J. Random Hacker is just one hacker. There are many of us, and we’re all alike.


computer science: art, engineering, science?

Computer Science is a science. Interested parties methodically explore novel concepts in an effort to further the frontier of accepted practice.

Software development is an art. Developers use tools to create something that can be enjoyed by others, whether for function or for purely aesthetic reasons.

Software engineering is a third field dealing with software that requires a certain level of rigor, robustness, and reliability.

So when you sit down at your computer to type out some code, you could be doing any of the three, perhaps simultaneously. To me, there are clear distinctions between the three that I will elaborate upon and that are often blurred or misrepresented by others.


Computer Science lives primarily in the halls of universities. Students of computer science begin by learning the basics of programming – pointers, basic structures, memory allocation. They build up to knowledge of data structures, operating systems, algorithms, and more, but ideally, they fundamentally understand how their program will run, all the way down to the transistor – the (current) basic building block of a computer.
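For a flavor of those basics, here is a minimal C sketch of the kind of exercise an intro course opens with – memory allocated, used, and freed by hand. It is an invented example, not an actual course assignment:

/*
 * Intro-course staple: a dynamically allocated array managed by hand.
 * The student, not a garbage collector, owns this memory.
 */
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    size_t n = 10;
    int *squares = malloc(n * sizeof *squares);
    if (squares == NULL) {
        fprintf(stderr, "allocation failed\n");
        return 1;
    }
    for (size_t i = 0; i < n; i++)
        squares[i] = (int)(i * i);    /* pointer arithmetic under the hood */
    for (size_t i = 0; i < n; i++)
        printf("%zu squared is %d\n", i, squares[i]);
    free(squares);                    /* forget this and you leak memory */
    return 0;
}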

This breadth of understanding enables them to pick a spot and dive deep – to investigate a novel concept. This could be developing a new algorithm for a known problem or applying known algorithms to new problems. There are metrics (e.g. time and space complexity) to evaluate findings, as in other fields of science, and peer review is an important aspect of successful research.
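As a small illustration of those metrics, compare two textbook ways of searching a sorted array – the canonical example of trading an algorithm for an asymptotically better one (a sketch, nothing novel):

#include <stdio.h>

/* O(n) time, O(1) space: check every element in turn. */
static int linear_search(const int *a, int n, int key) {
    for (int i = 0; i < n; i++)
        if (a[i] == key) return i;
    return -1;
}

/* O(log n) time, O(1) space: halve the search range each step.
   Only valid because the input is sorted. */
static int binary_search(const int *a, int n, int key) {
    int lo = 0, hi = n - 1;
    while (lo <= hi) {
        int mid = lo + (hi - lo) / 2;   /* avoids overflow of (lo + hi) */
        if (a[mid] == key) return mid;
        if (a[mid] < key) lo = mid + 1;
        else              hi = mid - 1;
    }
    return -1;
}

int main(void) {
    int primes[] = {2, 3, 5, 7, 11, 13, 17, 19};
    int n = (int)(sizeof primes / sizeof primes[0]);
    printf("linear_search finds 13 at index %d\n", linear_search(primes, n, 13));
    printf("binary_search finds 13 at index %d\n", binary_search(primes, n, 13));
    return 0;
}

Both functions answer the same question; the metrics are what distinguish them, and that distinction is the kind of result computer science quantifies and peer-reviews.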


Creating an app on your mobile device to count how many hops you make on a pogo stick is not science – it is software development, an artistic expression. Millions of lines of code are probably written every day* by people seeking to create a new app, a new website, or a new way to animate a cartoon character.

Developers may roughly follow a scientific method – hypothesizing about what might work, testing out a couple of different solutions, and then implementing one – but there is a notion lingering in the back of every software developer’s mind: everything can be rewritten. If your new app crashes, find the bug, fix it, and release an update. If your website does not lay out properly, adjust your formatting and hit save – the next time someone loads the page, it will be different. As Ian Bogost notes in his piece in The Atlantic, software developers are often treated to the ease of rapid repair.

This is a major distinction from engineering. The engineer of a bridge cannot try out one support scheme and easily replace it if the bridge cracks – safety and a lot of money are at stake. An engineer designing the brakes of a car cannot go with “the best he’s got”; he is responsible for meeting a standard set by someone who figured out how good car brakes should be. It’s okay if your pogo stick hop-counting app crashes after you hit 100 – it’s frustrating at worst.

*Didn’t look this number up but I can’t imagine it’s more than an order or two of magnitude off.

But there is software that is undeniably engineered. That bridge designed in part by civil engineers may also contain a monitoring system that ensures there is never too much weight on it or that detects small fractures in the material. The car’s brakes need to deal with frictional heat, but they are likely also controlled in part by the car’s central computer. Whoever writes the software for the bridge and the car is definitely an engineer, subject to the same safety standards as the other engineers.
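What might that look like in code? Here is a hedged C sketch of a load-monitoring check for that bridge – the sensor function and the numbers are invented for illustration, but the explicit, standards-driven safety margin is the point:

/*
 * Sketch of safety-critical style: thresholds and margins are written
 * down explicitly, not left to "the best we've got". All values here
 * are invented for illustration.
 */
#include <stdio.h>
#include <stdbool.h>

#define MAX_LOAD_TONNES 400.0   /* hypothetical rated limit */
#define SAFETY_MARGIN   0.90    /* alarm well before the rated limit */

/* Stand-in for a real load-sensor reading. */
static double read_load_tonnes(void) {
    return 385.0;
}

static bool load_is_safe(double load) {
    return load <= MAX_LOAD_TONNES * SAFETY_MARGIN;
}

int main(void) {
    double load = read_load_tonnes();
    if (!load_is_safe(load))
        printf("ALARM: %.1f tonnes exceeds the safe threshold\n", load);
    else
        printf("%.1f tonnes is within limits\n", load);
    return 0;
}

The app developer gets to ship and patch; the engineer has to justify every one of those constants to a standard.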

Bogost makes two points suggesting that software development is growing increasingly informal, and he implies that this growing informality disqualifies it from being an engineering discipline. While his first point, about the ease of rapid repair, is sound (if limited to certain types of development), I disagree with his second – that software is becoming more isolated.

Marc Andreessen wrote a short paper arguing that software is eating the world, and the evidence is compelling. While there are certain applications of software development that stand essentially on their own, it is naïve to say that software development as a whole is becoming isolated when, in fact, it is becoming more and more integrated into all aspects of our lives.


So writing code doesn’t necessarily make you an engineer, a scientist, or an artist, but it could make you any or all of them.
