If you were to have infinite time, you would be idiotic to _not_ spend a bunch of time fully internalizing the entire stack; or at least, trying to comprehend as much as possible given your non-infinite memory. Hell, whilst you're at it, throw in a rounded education in the humanities and sprinkle a ton of socialization in there too.
But you don't have infinite time. So instead, you try to find the balance that allows you to perform the task you wish to perform.
And at that point it's a completely personal decision. Perhaps you spend more on soft skills because you have managerial aspirations. Perhaps you drill down into bit banging simply because it's of intrinsic interest to you.
There probably is some basic level that's useful for every programmer to have, sure, but what that even means depends on your sector. I don't know a thing* about Windows, both because it's highly uninteresting to me and because it'd be a waste of time for me to bother with it. (Perhaps the two are linked!)
* okay, I know a decent amount compared to the man on the Clapham omnibus, but not compared to a Windows developer
I like to roam around the projects I use the most on GitHub and see if there's a bug I can work on, or some way to help out, when I have the time. It took me too long to realize that I could help open source projects without being the chief architect at Microsoft, or having a well-established coding blog, etc...
I love the infinite time explanation. It's a great way to explain how/why tradeoffs are made. I haven't been able to put it so succinctly, so, uhhh... that's mine now I'm stealing it. Please credit me in your original post.
Writing original code is laborious, extremely time-consuming and in many scenarios unjustified (for the majority of coders). It is not an excuse for ignorance, but originality in coding isn't particularly productive (IMHO).
I do take time to understand the code I am using, but for the most part I spend it finding the code (and libraries) I want to use, even in the simplest cases. I've developed a habit of giving credit in my code ... in fact I have an expansion ';c' on my Mac for giving credit! :-)
I see a lot of very experienced programmers spend time writing and debugging even simple lines of code which are perhaps completely obvious to them. I think that's just the way coding is. It is perhaps very difficult to get it "First Time Right".
I guess it's a balance between getting things to work vs. self-gratification.
Your comment mistakes the nature of your work. As a software developer you don’t get paid to provide code. You are paid to provide enhancements to business products. Changes to a product should be tested before they are shipped to the customer regardless of who makes them.
Avoiding writing original code is primarily an excuse to limit personal liability through diverting blame. Organizations that care about product quality would blame that developer/team for poor product quality regardless of who wrote the code.
One advantage of using open source libraries is that they are often more battle tested and cover edge cases you either didn't consider or have not encountered yet. The trade off is less time spent fixing bugs, but more time spent on maintenance (package upgrades for security vulnerabilities/features/bug fixes, updating your code for library API changes).
I have heard this excuse many times. Again, you are not paid to ship code. You are paid to ship product features.
Those open source libraries might be tested to exhaustion, but not in your application. Your application still needs to be tested either way. The presence of excess code does not eliminate defects but rather moves them to locations less obvious which means they are harder to find by the end user and just as costly to fix. That is the very nature of debt.
Regardless of who owns the code you or your team own the product.
Like you said, they aren't paid to ship code, they are paid to ship features. And those shipments usually have hard, arbitrary ship dates that do not reflect the time needed to code and test them. So in order to do your job of shipping features, at times you need to use pre-written and pre-tested code, because otherwise you'll fail to ship.
No matter how you look at it, someone wrote the code you use. Since you are heavily dependent on the fact that people write code for you and others to use freely, the end result is evidently more than simple self-gratification. The end result is things getting done.
I use TextExpander. I wanted to do some custom stuff and discovered this recently [federico-terzi/espanso: Cross-platform Text Expander written in Rust](https://github.com/federico-terzi/espanso)
I tried espanso a bit and it seemed to work pretty well and is customizable.
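For anyone curious, the customization is just a short YAML match file. From what I remember of espanso's config format (double-check the current docs), a credit snippet like the ';c' expansion mentioned earlier in the thread would look roughly like:

```yaml
# e.g. ~/.config/espanso/match/base.yml (path varies by platform/version)
matches:
  - trigger: ";c"
    replace: "// adapted from: $|$"
```

`$|$` is, if memory serves, the placeholder that leaves the cursor there after expansion, so you can type the attribution URL right away.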
Keyboard (System Preferences -> Keyboard or Spotlight -> Keyboard) -> Text, add whatever expansions you want, though this wouldn't be able to automagically figure out the actual attribution; you'd have to type that in after whatever template you set up.
I used to think the same with music and programming but I stopped caring at some point.
In my spare time I find joy in writing things from scratch but I see it more as challenging myself rather than "not cheating"
If I write a compiler purely to challenge myself, I'm much more confident in using an existing one in the future. However with a cheating/not cheating mindset I'd be reinventing the wheel every time.
I agree with the gist of the quote ("you're not a real x unless you do y" is silly). But it lacks nuance. From what I've seen, a lot of programmers never bother with what's happening underneath their code. For them, Spring, node.js etc. will always be magical black boxes. Here's another analogy for you: Would you go to a doctor that knows how to remove an appendix, but doesn't know how the circulatory system works?
> Would you go to a doctor that knows how to remove an appendix, but doesn't know how the circulatory system works?
Would you go to a doctor who understands how to do a heart transplant, but doesn't understand the Navier-Stokes equations underlying how viscous fluids like blood behave? Would you trust your car to a person who doesn't understand friction at the level of interatomic forces? Would you seriously trust your life to an airline pilot who couldn't single-handedly design and build a plane at least as capable as the Ford Trimotor?
People have always been specialized. Always always always. Even back before "specialization of labor" was officially invented, the people who lived in the Kalahari didn't know about hunting seals and people who lived in Polynesia couldn't hunt a buffalo or a bison to save their lives. There just isn't enough time, interest, or opportunity in one life to learn everything top-to-bottom. We're slowly improving the amount of opportunity, and you can fake an interest for a while, but increasing the amount of time is still a miserably slow process.
> Would you go to a doctor that knows how to remove an appendix, but doesn't know how the circulatory system works?
The ability of a surgeon to successfully remove an appendix necessarily involves a fair amount of knowledge of the circulatory system, so the analogy is flawed. In my experience some people get by just fine only knowing how to address a problem through some particular framework. The point of abstraction in the end is to release people from the burden of considering the details.
I think that this is a better analogy: would you go to a massage therapist who knows how to release tension in your back by manipulating your tissue, but doesn't know how neurons and axons work? Perhaps greatness can be achieved by a therapist who knows the ins and outs of every aspect of your body, but as a patient/customer I am really only interested in the skills of the therapist insofar as they manifest in the ability to relax my back and shoulders. And really, a lot of development problems are like that. People perform well at their development jobs without a firm understanding of the mechanisms and inventions that enable them to perform it in the first place. It's by design.
The real problems arise when there is something you really ought to know. For example, if I don't know how memory is allocated by the black box framework I use and memory use suddenly blows up through my interactions with it, the only thing I can fall back on is a lower level understanding. But usually this is fine, and it is someone else's lower level understanding that you fall back on. If you're working high up in a web front-end perhaps you have to consult someone who contributed to a web framework you're using. Maybe you have the wrong approach in a way that leads to instantiating too many objects. On the very low end of software development you may end up having to consult someone who laid the board out and can reason about the operation of the application in terms of the electronics involved. Maybe these two buses shouldn't be used at the same time because of a reflection problem.
> would you go to a massage therapist who knows how to release tension in your back by manipulating your tissue, but doesn't know how neurons and axons work?
You should know one abstraction layer deeper. In this case one abstraction layer would be that the therapist knows what muscles there are in a back and where they are. You can administer massages without knowing it just by memorizing patterns, but you would get a lot better and can handle more cases if you know about the muscles.
Similarly, you can become productive using a framework without knowing anything about how it works underneath, just by memorizing patterns that work; but you would get a lot more productive and could handle more cases if you had a decent understanding of how the framework is implemented.
More generally, I'd say that it's a matter of what improves your ability to perform the work you want to perform weighed against the possibly diminishing returns of investing time and energy in understanding things in more and more detail. Depending on the circumstances of your work, this may favor digging into more than a couple of abstraction layers (for example, an embedded developer that knows both software, digital circuit design and electricity can be immensely useful) or it may favor unquestioningly accepting the layer of abstraction you target as the whole truth (for example if I'm working on expressing some performance insensitive business glue logic for some proprietary platform, in which case I don't even have a choice).
My mechanic likely doesn't understand the full complexity of the fluid dynamics involved in braking systems, but he's perfectly capable of changing pads, rotors and tires when they go bad.
The analogy is with mechanics who aren't perfectly capable of changing pads etc, because they keep trying to redesign the car to make the latest pads etc fit, even though this is a bad idea and doesn't work.
One problem with this analogy is that a surgeon does not go into the operating room solo to remove your appendix. You would first be seen by an anesthesiologist, someone with a nuanced understanding of biology and pharmacology, just to get you prepped for the operation (the surgeon likely has only a surface level understanding of this, and doesn't need it to perform her job). Once in the OR, you would also be attended to by Nurses to monitor your vitals and administer any medications and perform any life-saving interventions should something go awry. There also may be a Surgical Technician involved in preparing tools needed to perform the procedure. Perhaps an ultrasound technician is present to make sure the surgeon's incision hits the mark, etc, etc. The point is, there is still a ton of abstraction happening so that all the surgeon has to focus on is performing the procedure. This is very similar to how Software Development works on large projects, except we often have multiple surgeons operating on different parts of the same body at the same time!
I think a better analogy for software would be construction. Multiple people of different specializations and skill-levels working together to produce a single artifact. As a framer, you don't need to know how to lay the foundation, you just need to know how to build on top of it. Furthermore, you don't need to know the chemical process of concrete to pour the foundation, you just follow the recipe that's been proven. You don't need to know the math required to determine the integrity of a structural load in order to frame a roof, you just follow a plan. Some projects may require engineers and specialists who do have a deep understanding of such things, but only to create the blueprints.
So what? You don't understand what's going on inside your CPU, because at this point CPUs are too complex for anyone to fully know, and your understanding of the OS is probably on an as-needed basis. Why hold others to a different standard?
Correct. A surgeon or gastroenterologist might only have an understanding of the circulatory system insofar as it helps their surgeries and gastrointestinal work.
So yeah, I'd probably go to a doctor who's really good at removing appendixes over the guy with comprehensive medical knowledge.
Doctors all acquire broad medical knowledge as part of their training. They're not allowed to specialise until they've shown adequate competence in basic general medicine.
An eye specialist who is absolutely clueless about heart or kidney function would be unusual, and not particularly useful. Not only do most drugs have multi-system side-effects, but many conditions affect multiple systems.
So Dr Appendix Removal will almost certainly do all kinds of surgery.
Consultants tend to have a more limited focus, but you don't get to be a consultant without doing a lot of other medicine first. Usually they either work as supervisors/managers, or they're called on to consult/advise and maybe operate on more challenging cases.
So let's rephrase. Do you trust an AI controlled robot specialized in appendix removal (with a very good track record) but zero knowledge of other systems or procedures?
Surgeons spend a lot of time studying. It's not like some employers in tech that expect the dev to operate on different kinds of patients all day long at 100% productivity, with no prior training except what they already knew from school. So information is constantly being absorbed. But developers need to rely on abstractions because the field is constantly changing: after 10 years a framework will be obsolete or have undergone major changes. Meanwhile, humans won't evolve much in 10 years; there will be a lot of new research, but the anatomy stays the same.
Did you understand what I wrote? Nuance is exactly what I advocate for. The problem is not people not knowing things, but people thinking they don’t ever need to know more and just rely on the underlying magic.
School teaches us that reusing someone else's code is cheating, and using a library would be cheating in a job interview. But using a library is actually standard practice, even when you are paid per hour. Doing anything from scratch takes a lot of research and/or trial and error. Even basic stuff we take for granted, like putting text on the screen, involves fonts, Unicode, the graphics driver, etc.
> School teaches us that reusing someone elses code is cheating
No, school doesn't teach this. School just teaches that trivializing an assignment by using others' code is cheating, since the point of school is to learn how things work, not to learn how to find, evaluate and use libraries.
You could argue that school should better teach library usage, but that is a very hard topic to teach. What would the assignments look like? Would it be a case problem? Like, you get a list of current dependencies, an overview of your team members' skills, and the task you are supposed to solve, and then you go out looking for actual libraries and get graded based on how well your choice aligns with the professor's?
Whose brainpower is cheaper: the professor generating 30-300 novel, unique programming challenges at an appropriate (and identical) difficulty level per course per semester, or the student who's paying to be educated?
The whole idea of open source is to remove barriers to building on other's code. This isn't cheating. Ignore what school taught you, but make sure you follow the ideals of the license, and try to pass it on if possible. Also, if you import a library, try to actually understand what it does. This is sometimes an unreasonable request (for example libsodium is a gateway to madness/enlightenment) but you should make an effort to understand the code you're importing!
School teaches us the worst possible thing is test failure, so better play it safe and avoid writing original code as much as possible.
As a front-end web developer there are only 2 APIs to learn: DOM and web API (the html5 and browser stuff). Both of those APIs are well documented industry standards. Both change more slowly than the average developer’s career and are almost entirely backwards compatible to their first versions. In short you almost never have to relearn the standards, they always work as designed, and risks of cross-browser failures are largely years in the past.
Yet most front-end developers are scared shitless of writing to the standards directly, because that means writing original code and putting their name on it. Better to play it safe with a giant monster framework and 1000 NPM packages in order to bind some CSS to a piece of dynamic content. The commonality of invented here syndrome suggests that not spending the time to use common libraries is what’s cheating.
However, is this how we get into situations like npm, with everyone using other people's libraries and having so much bloat? Where is the balance if any? I don't have an opinion either way, just putting it out for discussion.
I will write here that you should use libraries. Then next week you will see me complain about an app having two million dependencies... Here's how I find balance:
I try to use "battle tested" code rather than writing my own, e.g. other people's libraries. But I try to keep the dependencies down. If a package has 10,000 files I'd rather just rip out the functionality I need. (So I hate when you cannot do that, e.g. when every function in the code is dependent on every other function.) Often I find issues and am either too lazy to send a "merge request" or the maintainer refuses to merge it. So for many dependencies I often end up using my own fork.
Dependencies are a double-edged sword. It's good that you get security updates. But if you can't review all the changes, security issues can also sneak in. Like if someone's npm account gets hacked and the hacker sneaks in a remote shell or backdoor. Those can be hard to find if you've got millions of dependencies which get tons of updates every day.
So I usually lock down dependencies, and NPM becomes a fancy tool for copy/pasting.
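Concretely, "locking down" can be as simple as dropping the semver range prefixes so npm installs exactly what you reviewed (a sketch — `left-pad` is a real package but the pin is illustrative, and `some-lib` is a made-up name):

```json
{
  "dependencies": {
    "left-pad": "1.3.0",
    "some-lib": "2.1.4"
  }
}
```

With `^1.3.0` you'd silently pick up any new 1.x release; with a bare `1.3.0` nothing changes until you bump it yourself (and `npm ci` against a committed lockfile gives you the same reproducibility guarantee).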
Every npm library has its own copy of its dependencies. This duplication is what causes the huge number of dependencies.
This duplication is necessary because npm is not a curated software distribution designed to work as a cohesive system. Anyone can push software to npm. As a result, there are packages that depend on different versions of the same library and they must have their own individual copies of the dependency in order to work. This isn't the case in a Linux distribution with maintainers: there's one instance of the package that's shared by all other packages.
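A sketch of what that looks like on disk (names and versions made up): npm hoists one copy to the top level when it can, and nests a private copy only where the version constraints conflict:

```
node_modules/
├── util-x/           <- 2.3.1, hoisted; used by the app and lib-a
├── lib-a/            <- depends on util-x ^2.0.0
└── lib-b/            <- depends on util-x ^1.0.0
    └── node_modules/
        └── util-x/   <- 1.9.0, lib-b's private copy
```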
School presents students with artificial rules and an environment that simply doesn't reflect the real world. They call it academic integrity but it's really just artificial difficulty. In the real world, people work together as a team, they build on top of each other's work, they have free access to reference material... The real world is pretty much the opposite of a school exam or assignment.
In school, people have to prove themselves by doing stuff without any support whatsoever, only to be punished for any and all mistakes. All grades are final: there is no opportunity to review one's mistakes, learn from them and get re-evaluated. Either get it right the first time or risk failing the course. Of course people cheat: the cost of failure is simply too high. Outside school, people use every trick in the book to lower cost and risk; somehow that's not acceptable in school.
If school is currently teaching this, it is seriously failing its mandate. This does not mean that using sort(3) as a solution to a sort assignment should be acceptable.
School, if it is not merely using you as a source of revenue, is trying to teach you to understand something about the technology, as there is more to making good software than copying existing stuff.
Most problems solved with technology are like this, stringing together the right libraries with a thin layer of business logic.
However many startups or research-oriented technology development will require some part of your stack to be reinvented from scratch to solve whatever problem is the premise of the business.
In these situations it becomes clear pretty quickly who is comfortable solving technology problems from first principles, situations that these educational exercises are aiming towards.
Isn't "cheating" already a well-solved problem? The only way you can cheat is by using code you didn't write without the owner's permission. If you violate someone's copyright you're "cheating". Let's take an extreme example: your product is a 1:1 copy of an existing product like IntelliJ with a reskin. You're cheating if you haven't negotiated a license with JetBrains, but if you do have such a license and sell an identical product then you are not cheating.
There is no shame in using a library or tool to solve a problem that you do not understand. A "real" software engineer knows when it makes sense to outsource the work to an existing tool or write their own.
When you have to interoperate with other programs through an open standard then it may be counterproductive to write the code yourself. I've personally seen way too many hand written CSV parsers/writers that were just plain wrong.
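To make the CSV point concrete, here's a minimal sketch of the classic failure mode: splitting on commas tears apart quoted fields, which the stdlib parser handles for you.

```python
import csv
import io

line = 'name,"Doe, Jane",42\n'

# Hand-rolled "parser": split on commas. Breaks on the quoted field.
naive = line.strip().split(",")
print(naive)   # ['name', '"Doe', ' Jane"', '42'] -- quoted field torn in two

# Python's stdlib csv module understands the quoting rules.
proper = next(csv.reader(io.StringIO(line)))
print(proper)  # ['name', 'Doe, Jane', '42']
```

And that's before escaped quotes, embedded newlines, and BOMs, which is why hand-written parsers keep turning out "just plain wrong".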
That's the problem I have when making a toy project from scratch in order to learn a technique (be it databases, blockchain, or deep learning). I'm always wondering if I'm allowed to use some algorithm or clone some GitHub repo. Or whether copy/pasting an auxiliary function is cheating or not.
To avoid such moral dilemmas I try to come up with projects that solve a new problem (which also comes with its own set of challenges). And then use all the means at my disposal.
As a bonus it also keeps me motivated, since it has a shot at being a profitable business idea.
The whole crux is that it's 'cheating' if you don't actually understand the tools. So long as you don't run into any problems where not understanding an algorithm stops your entire development process, who really cares?
Looping isn't "cheating" if you know what you're doing. But if you blindly do that from the beginning without ever learning how rhythm really works, then you're going to be lost when things get more complicated.
And the same is true of the dev who copies tons of code from stack overflow without really understanding it. When issues like thread safety of the app come up, that dev is going to be lost.
Experienced musicians choose loops and samples understanding full well the trade-offs of quantization and timbre. Experienced devs bring in libraries understanding the trade-offs. Answering the question "how well do I need to know this library before introducing it in production?" is a really, really hard problem.
In practice, I see junior devs blindly copying code that subtly won't work more than I see them writing their own DI system because of not-invented-here syndrome. But, admittedly, the latter tends to be more destructive.
Lazy learning is a lot less efficient; in order to get good you need a ton of deliberate practice on the basics, practice which won't happen as long as you are "cheating".
Coming from a gamedev perspective, I've personally decided to draw the line at the following for my projects:
* Using your platform's graphics/sound/input/etc. APIs, or wrappers like SDL, is not cheating
* Using a third-party engine is cheating
Other programmers may draw the line elsewhere and that's fine. I think for me it's a question of, using a library to achieve <hard task> is fine, but using a framework puts on me the burden of understanding and then extending someone else's half-completed program. And when that someone else made a decision that I wouldn't have made that fucks with my workflow, it also puts on me the burden of, is it okay to go in and change it or implement a low-level hackaround? Or is that cheating, and should I do everything in the framework's own terms?
I don’t think players care about the architectural purity of your product.
Some baseline of performance is sufficient as long as you can deliver the proper experience, which is really the only thing they’re seeking in your game. If that same experience can be delivered MUCH more quickly and easier why would someone deny themselves and their users those benefits?
I understand there is nuance in this decision but it feels dangerous as a hard and fast rule. I think especially so when team size > 1-2 and common understanding becomes important.
> I don’t think players care about the architectural purity of your product.
But we can kinda see how the "quick and easy" approach with modern engines is leading to lots and lots of sloppy games with leaky seams.
I always appreciate it when a gamedev writes their own engine, and IME these tend to come out as the more polished games. Doing your own engine would also enable open sourcing it, which is something gamers can be quite grateful for 10-20 years down the road.
Well, they kinda do, sometimes at least. This is a slightly weird case, but Unity has gotten a bad rap recently for being host to a million shovelware and asset flip games due to its low barrier to entry, so much so that some people avoid things made in Unity entirely.
How many people though? If you're going to let this sort of rumor inform technical decisions then you need to quantify it. There are after all quite a lot of other odd but unpopular reasons people won't buy a game.
At first I thought the “libraries, not frameworks” mantra* was silly for drawing an arbitrary line in the sand. Over time it’s lined up well with my experiences though.
No, using an already existing engine is called delivering a product in a predictable timeframe. Valid reasons to not use a specific engine for a specific project are plentiful, even if all engines get ruled out, but "cheating" is nonsense.
In my Web programming class in college, our final project was to make use of everything we learned to create a website. In class we didn't get past chapter 9. I however, read and completed all chapters. I made use of Ajax and a webserver which was only taught in chapter 12.
Fast forward two years, the teacher is trying to run the project to showcase an example project her students can do. She can't run it. She remembered it being a great project, but now for some reason some parts won't work.
I get a call to come help them. I dust up the old project, edit it, and get it running in front of the whole class. "What was the issue?" She asks.
"I used a webserver, now I hardcoded all the data since it's just a demo"
"Oh, so you guys cheated?"
Long story short, I was threatened to get my grade revoked, but because I have made things right I was forgiven.
I don't get it (probably neither do you). Was using the webserver what she considered cheating? If so, why? You said it was covered in chapter 12. If it was part of the course, why would it be considered cheating?
Ok, so this is actually a topic which I've been thinking about quite a lot over the years.
Back in college there was naturally a strong emphasis on writing your own code, after all, you're supposed to be learning. In the end, we had to roll out everything on our own - rarely, if ever, did we get to use libraries or frameworks.
They were extremely strict on plagiarism too, so you were constantly afraid of your code getting flagged (for whatever reason).
All of this really created a mindset that using libraries = cheating. Same with looking up help on the web.
It took me years to get rid of that mindset, simply because it didn't feel right.
If so, you weren't writing your own code. You were just writing a description of the code in some human-oriented programming language, and the compiler wrote the actual code for you.
Also, the compiler almost certainly linked in a runtime library - more code you didn't write.
But let's skip the compiler and its runtime, and even any assembler, and just write raw binary machine code.
Now you're writing your own code, right?
Well... On any modern processor, your so-called "machine code" is just a high level language that the processor compiles into its own internal operations - which don't look anything like your machine code and don't even get executed in the same order. So you're not writing the code the machine runs.
You could avoid this issue by writing code for an older processor like the 6502, where your code bits go straight into the logic circuitry as is.
In fact, that would be a darn good idea - you would learn a lot doing it!
Funnily enough my computer engineering course in college was pretty much this comment.
Learn to code in C & ASM. Learn to hand-compile to ASM, and hand-assemble to machine instructions for the CPU we were using.
Now learn VHDL.
Implement a processor on an FPGA.
Write a (tiny, minimal, not fully standards compliant) C compiler for it.
You're back to step 1.
Of course we didn't implement the synthesizer, or the place & route logic, or the bitstream format, or the FPGA itself. You have to stop somewhere.
Here is a true story. When I first started programming, we were told that professional companies developed software as follows. (Of course we were a scrappy timesharing startup, so we took shortcuts, but this is what we were told the pros did.)
• A systems analyst took business requirements and created a specification.
• A programmer took the specification and used it to draw flowcharts.
• A coder took the flowcharts and converted them into code in a particular programming language, hand-written on coding forms.
• A keypunch operator took the coding forms and punched the code onto cards.
Pop quiz! Who actually wrote the code: the programmer or the coder?
You may be closer to the mark than you realize. A good keypunch operator wouldn't just type in your code, they would also do some basic sanity checks on it, and either fix some of your bugs as they punch, or hand you back your coding form with some notes marked on it.
After all, you didn't want to have to wait many hours or a full day to get back your card deck and printout to find what you did wrong.
When you found a keypunch operator like that, you would seek them out every time, maybe even bring them little gifts.
Of course now we are all our own keypunch operators!
To me it's: if you do a good job with simple tools you'll be given boring work. If you work in cool technology you get to keep working with cool technology, even if you're producing junk.
I rather interpret it as "If you do a good job with simple tools you'll be given opportunities." (I'm interpreting the parent post to be implying that the simpler, solid hire is the one who is no longer there because they were able to leave for a better place). Also, as cautionary tale for managers: "If you give folks who write solid code boring work, they will find something that suits them better."
> Never hire programmers who demonstrated that they don't care about the boring stuff.
This is why I feel like I'm a bad programmer and no longer look for work in that capacity. I enjoy the edge cases and challenging problems, but I am almost incapable of creating CRUD apis and user interfaces at this point. I'm amazing in a hackathon, but contribute poorly to regular sprint style work. I can pick up and integrate new technology extremely quickly and can refactor a spaghettified mess of code into something maintainable, but I am completely miserable doing maintenance level work after things are smoothed out.
Yeah, programming is a big field though, so always be on the lookout via your network, LinkedIn, and Indeed. I left my last job because it was going to require thousands of hours of combing through hardware specs to verify them. I liked day-to-day testing and QA, but no way was I going to do that. I just found a new job that lets me continue on with the type of programming I enjoy.
> I am completely miserable doing maintenance level work
Sounds like something you might want to work on even if you don't get a job in programming. It seems unlikely you'll find any line of work where everything you do every day is new and exciting.
I'm not sure I'd agree. Sometimes it really pays to play to your strengths. I've had jobs that were odd work and putting out fires for a few years now, and I've never been happier. I'm good at that.
Programming-wise it isn't as hard to find as you might think. Maybe not working FAAMG or similar, but it's quite doable. If you don't compromise on the desire to work only on interesting* stuff when a big paycheck comes along, you'll eventually end up in a position where you only work on interesting stuff. The gamut of work is wide.
To be honest such work isn't even that rare. Most people would rather make $200k and complain about how boring their job is than have an interesting one. I think for many people "interesting work" is often the same as "stressful work" and so the interesting positions are not as easily filled.
[*] Interesting as defined by me, and probably the parent poster as well. Obviously this is subjective.
So far I've been able to find positions that work within these constraints. At this point I'm more comfortable working with my ADHD than fighting against it (I tried that for years too). For the last seven years I've helped companies move into the cloud. I'm always working with new code bases and finding new challenges to adoption. I'm sure this will eventually become routine as well and I'll have to find some other way to stay engaged. I'll probably be one of those people who completely change careers at some point.
I think part of the issue is in knowing how you would go about learning and implementing a thing. There is a lot of code that I don't need to bother writing because someone else has done it, but I could write it if I needed to, or could learn the algorithms and implement it. A programmer who has a breadth of knowledge and a general understanding of how they would go about writing Node.js itself will do a lot of things in Node.js a lot better, especially optimizing. I could figure out how to write a lot of software. And when I see something, I can usually run through which language, architecture, etc. I would use to make it. That's useful.
That's a really poor analogy. This guy is making tools equivalent to tools that already exist. When someone chooses not to use a framework, they don't rewrite the framework; they write something more specific to their needs.
I thought it was about coding. I remember doing some paging data-entry screens with FoxPro (generating the screen form code, then adding in additional code to handle the paging), and when I let another user know how to do it on a forum, I got a "That's cheating" response. Heh.
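The generate-the-screen-then-add-paging trick isn't FoxPro-specific; at its core, paging is just slicing a record set into fixed-size chunks. A minimal sketch in Python, with illustrative names (nothing here comes from the original FoxPro code):

```python
def page(records, page_size, page_number):
    """Return the records for a 1-indexed page, empty if past the end."""
    start = (page_number - 1) * page_size
    return records[start:start + page_size]

rows = list(range(1, 26))  # 25 records, pretend they're data-entry rows

first = page(rows, 10, 1)   # records 1..10
last = page(rows, 10, 3)    # partial final page, records 21..25
```

The generated screen code handles display; the paging layer only has to track `page_number` and hand the right slice to it.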
Calculator-in-the-pocket problem. Why learn if we can externalize memory, etc.
Of course application becomes the concern, but can we really be assured that someone who can perform the calculations mentally also knows when best to use them?
Interesting. I've been told that there should be no 'black magic' ever since I started working in this field. But yes, I cheat more or less every time anyway. So what's the standard here?
I think the sweet spot is knowing enough to understand how something works, in essence, even if you couldn't necessarily build it yourself without a whole bunch of bootstrapping.
I don't think that cheating exists, unless you are prosecuted for it. For example, you don't fail a final for cheating, you fail a final for being caught doing something that someone paid by the Uni thinks is cheating. So unless you are punished by an active authority, you have not cheated. And thus, you can't cheat at software unless you are censured for it!