IT Security History and Architecture Part 5 of 6

Tuesday, August 24, 2010

Dr. Steve Belovich


This is the fifth installment of a six-part series on IT Security History and Architecture

(Part One) (Part Two) (Part Three) (Part Four).

5.0 The Desktop Revolution (how RAM & disks got really cheap)

5.1 Consumer Market Economics Limits Design Choices

While technological and business advances were going on in the mainframe and minicomputer world, the same thing was happening in the PC world – only with an evil twist. The microprocessor (which first appeared in 1971 with the Intel 4004) allowed for the cheap introduction of home computers (e.g., the Apple I in 1976, the TRS-80 in 1977 and the IBM PC in 1981).

Cost dominated the retail market - as it still does. The cheapest, simplest design is the one that wins. Security, performance, etc. are all secondary to cost. The early personal computer O/S was made very simple and extremely dumb, with bare minimum support for a file structure. In fact, QDOS (Quick and Dirty Operating System) was the forerunner of DOS, which led to Windows 3.0, Windows 3.11, NT 3.51, NT 4.0, W2K, W-XP, Vista, etc. Of course, CP/M (Control Program/Monitor) had been out there since the mid-1970s and many concepts embodied within DOS were taken directly from CP/M.

Because early PCs had to be cheap, they were necessarily extremely limited in capability and were initially 8-bit microprocessor-based (the MOS Technology 6502 for the Apple and the Zilog Z80 for the TRS-80). So nearly all of the advances in mainframe and mini technologies, e.g., interleaved RAM, hierarchical storage, multi-way set-associative cache RAM, virtual memory management, multi-port I/O and multi-threaded applications, were deliberately eliminated from the design of the PC. These features had to be eliminated for both cost and packaging considerations.

Applications (Apps) were also simple (and very dumb) and carried with them whatever run-time functions were needed. Apps also had to handle their own memory management because the O/S did nothing there. Early desktop O/S's (e.g., early versions of DOS) did not even have a print spooler! When you told an IBM PC to print in 1983, that's exactly what it did - and nothing else. You had no control over the machine except to type CTRL-C and abort the print job.
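
To make the contrast concrete, here is a minimal sketch of what a print spooler buys you - written in modern Python purely for illustration, since nothing remotely like this ran on an early PC. Jobs go into a queue and a background thread feeds the slow device, so the user gets the prompt back immediately instead of waiting on the printer:

    import queue
    import threading
    import time

    print_queue = queue.Queue()

    def spooler():
        # Background thread: drain jobs one at a time so the
        # foreground task never has to wait on the (slow) printer.
        while True:
            job = print_queue.get()
            for line in job.splitlines():
                time.sleep(0.1)        # stand-in for slow printer I/O
                print("PRINTER:", line)
            print_queue.task_done()

    threading.Thread(target=spooler, daemon=True).start()

    # The "user" submits a job and immediately regains control -
    # unlike early DOS, which busy-waited until printing finished.
    print_queue.put("page 1\npage 2\npage 3")
    print("prompt is back - job prints in the background")
    print_queue.join()                 # demo only: wait before exiting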

5.2 System Security Deliberately Eliminated in PCs

When the desktop operating system (O/S) was designed, all of the security concepts learned in the mainframe/mini world were tossed out because they were not required for the intended use of a home computer. More importantly, any “security procedure” required RAM, occupied disk space and used up CPU power – all of which were valuable resources at that time. Sadly, most “cyber kids” do not understand this history, which is partly why we are in the mess that we are in (more on that later).

The PC protection mechanism in the early days was physical: lock it up. In the early 1980s, there were metal frames made to hold early PCs and locks on the case to prevent physical access to the innards. Such mechanisms were easily defeated (I preferred the crowbar myself), but tampering would leave a physical trace.

The bottom line is this: the essential features of multi-user support, multi-user protection and system security were DELIBERATELY eliminated from the early desktop O/S design.

These features were not needed for early PCs because they were designed for the home market where there would be one user at a time and security was not a concern. Price and convenience drove the design and it still does today. Why take up extra RAM, disk and CPU cycles when the market did not need it, did not want it and would not pay for it? It was a no-brainer.

5.3 Early Desktop Security Focused on Anti-Piracy

In the 1980s, “software security” became a concern because of software piracy or illegal software duplication. The big “PC Security” issue was focused on preventing the user from doing something to the “outside” (e.g., illegally copying an application). Interestingly, PC security today tries to stop the “outside” from doing something to the user! How times have changed.

On the application side, “dongles” or hardware keys were used to ensure that the application would only run on the intended hardware. Of course, such devices did nothing for data protection or for ensuring that the application would run properly, but those vulnerabilities were not considered threats at that time. The main purpose of hardware keys was to thwart copying of software and its illegal use on different hardware.

Further, the technology of the time helped prevent the copying of applications because CD/DVD RW devices did not exist and jump drives were only a dream.

5.4 Dial-Up Networking

The concept of networking home computers was unanticipated in the late 1970s and early 1980s. Networking originally consisted of remote users on “dumb terminals” using modems to access a bigger machine. The Bell 103 modem standard came first; Hayes then became the dominant player and its famous “AT” modem commands became the de facto modem control standard. This allowed control and data to be shared on the same link, a concept called “in-band signaling”.
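
To give a flavor of that era on modern hardware, here is a hedged sketch using the third-party pyserial library; the port name, speed and phone number are placeholders, not a real configuration. The point to notice is that commands and data travel over the very same line:

    import serial  # third-party "pyserial" package

    # Port, speed and number below are placeholders for illustration.
    modem = serial.Serial("/dev/ttyS0", baudrate=2400, timeout=2)

    modem.write(b"AT\r")           # "attention" - is the modem there?
    print(modem.readline())        # expect b"OK" back on the same line
    modem.write(b"ATDT5551234\r")  # dial - a command, not data...
    print(modem.readline())        # ...until "CONNECT", after which the
                                   # very same line carries the data stream

Because control and data share one channel, a modem watching its data stream for the escape sequence "+++" could be tricked back into command mode by cleverly crafted data alone - an early lesson in why in-band signaling is risky.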

Later, PCs replaced the mainframe “dumb terminal” via terminal emulation programs (e.g., IBM 3270, DEC VT100, etc.) but access was still limited to dial-up connections to a larger machine. In the late 1970s and early 1980s, hooking up two PCs via some sort of network had no apparent purpose. PCs still had single-task operating systems (e.g., early versions of DOS) and Windows was still years away. Robust network protocols were still experimental, the “physical layer” of the OSI (Open Systems Interconnection) model was still evolving and no stable networking technology existed.

5.5 The Invention of the Internet

In parallel with this was the development of the Internet, which was really born out of Dr. Leonard Kleinrock's work at MIT in the early 1960s, along with ARPA (the Advanced Research Projects Agency, later renamed DARPA). That work was started in the 1960s and grew throughout the 1970s as the ARPANET. General Electric was also involved with the GE Information Services Network, as was TYMNET; both were terminal-oriented and supported both interactive and batch processing.

The main networking goal in the early days was simply getting it to work! Early protocols were simple; some complexity was added later on to prevent errors such as lockups and other early “denial of service” situations, which had a variety of causes, including a lack of reassembly buffers for lengthy messages. The concept of a “store and forward” (S/F) network was explored and refined. This involved breaking up messages into manageable chunks called “packets”, transmitting them over the network and then reassembling them at the receiving point.
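
Here is a toy sketch of that packetize/reassemble idea in Python - illustrative only, as no real protocol looks like this - which also shows why reassembly buffers mattered:

    import random

    def packetize(message: bytes, size: int = 8):
        # Break the message into numbered chunks ("packets"); each one
        # also carries the total count so the receiver knows when it
        # has everything.
        chunks = [message[i:i + size] for i in range(0, len(message), size)]
        return [(seq, len(chunks), data) for seq, data in enumerate(chunks)]

    def reassemble(packets):
        # The receiver must buffer every fragment until all arrive.
        # Early networks that ran out of reassembly buffers on long
        # messages could lock up - an accidental denial of service.
        total = packets[0][1]
        buffers = {seq: data for seq, _, data in packets}
        if len(buffers) != total:
            raise ValueError("missing packet - cannot reassemble")
        return b"".join(buffers[i] for i in range(total))

    pkts = packetize(b"store-and-forward splits messages into packets")
    random.shuffle(pkts)            # the network may deliver out of order
    print(reassemble(pkts).decode())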

5.6 Early Network Protocols Ignore Security

The engineering emphasis in those early network protocols was on ease of connectivity, maximum utilization of expensive bandwidth and reliability of the connection. Getting the entire network to operate correctly was the goal. All else was secondary. Security was not an issue and was largely ignored. The technologies used were intended to be convenient and easy to use so that hooking up to the network would be a quick and easy thing to do. The two main protocols that arose were TCP (Transmission Control Protocol) and IP (Internet Protocol). Originally conceived as a single protocol, TCP was later split into the layered pair that we now know as TCP/IP.
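
That design emphasis is still visible today: a few lines of Python suffice to open a TCP connection, and nothing in the protocol itself authenticates either end - if you can reach the address, you can connect (the host and port below are just an example):

    import socket

    # TCP itself carries no notion of identity or authorization.
    with socket.create_connection(("example.com", 80), timeout=5) as s:
        s.sendall(b"HEAD / HTTP/1.0\r\nHost: example.com\r\n\r\n")
        print(s.recv(200).decode(errors="replace"))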

Security was not required for early networks because access was physically controlled. Also, the built-in access control mechanisms of the mainframe or central machine were well-established, well-understood and were the “guardians of the gate” to prevent unauthorized access. The network simply presented the access request to the mainframe (or minicomputer) and that machine had the responsibility of granting or preventing access. That worked fine for that time.

5.7 PC Operating Systems Had No Secure Foundation

While networking was improving, the initial O/S (Operating System) designs for PCs discarded or ignored the “mainframe/mini” concepts of shared resources, multi-user access, memory protection, multi-layer operation modes (e.g., kernel, executive, supervisor, user), user isolation, file-level access protection, ACLs (Access Control Lists), privileges, quotas, etc.
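
For readers who never met these mechanisms, here is a schematic sketch of what an ACL check amounts to - the concept only, in illustrative Python, not any particular O/S's implementation:

    # Each object carries a list of (user -> allowed actions).
    ACL = {
        "payroll.dat": {"alice": {"read", "write"}, "bob": {"read"}},
    }

    def access_allowed(user: str, obj: str, action: str) -> bool:
        # Default-deny: no entry on the list means no access at all.
        return action in ACL.get(obj, {}).get(user, set())

    print(access_allowed("bob", "payroll.dat", "read"))   # True
    print(access_allowed("bob", "payroll.dat", "write"))  # False
    print(access_allowed("eve", "payroll.dat", "read"))   # False

The early desktop O/S had nowhere to even store such a list: a FAT file system carried no per-user metadata at all.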

Implementing these concepts properly required CPU hardware features such as multiple instruction execution modes, multi-level interrupt control, memory management, cache control, multiple control and status register (CSR) sets, etc. Microprocessors did not have these features at that time because they were not required and the necessary on-chip “silicon real estate” was prohibitively expensive (I made wafers back in 1980 and the feature size was very large).

These engineering concepts were essential for a secure system because they allowed many users to share the computing resources (CPUs, RAM, disks, etc.) without interference. One user's mistakes did not percolate over into another user's work. The operating system handled all the housekeeping and ensured that the entire computing system operated correctly even if an individual user did something stupid. The mainframe and minicomputer O/S was designed to detect and prevent that from happening, and these technologies were maturing (cf. Barry Schrager's security work in the early 1970s, which influenced IBM's RACF and led to ACF2).

These critical engineering concepts were not included in the architecture of the home computer. Such technologies were very costly, memory-intensive, CPU-intensive and served no logical purpose for a home computer. More importantly, including these concepts in the desktop O/S served no economic purpose for the home computer. Price always drives the consumer market and there was simply no demand for such features.

The problem is that such features really need to be engineered in from the beginning in order to work properly. Adding them afterward is nearly impossible - and it has not occurred yet in the PC market. We are now living with the consequences of that.

5.8 Networking PCs Requires A Secure O/S

Once networking was expanded to include local networks of personal computers, then access control for personal computers became an issue. It now mattered who could access which computer and when they could do that. It now mattered who could access which specific resource of which machine and when. Privileges (what you're allowed to do) and quotas (how much of something you're allowed to use) now became important.

So, the personal computer now needed a secure foundation and it just wasn't there. Usernames, passwords and some limited permission management were the best that could be done. However, such access control mechanisms were crude and easily defeated. There was no underlying security mechanism for the PC operating system and no easy way to add it either. The size of the installed base and the economics of the consumer market prevented the much-needed re-engineering of the desktop's operating system. Why bother to create it if you cannot sell it?

As an example, just adding proper “object marking” (a key requirement for a secure system) would require a brand new file system which would force the replacement of the entire installed base of PCs and software. By way of a benchmark, that installed base is about a trillion dollars.
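
To picture what “object marking” means, here is a loose sketch in the spirit of mandatory-access-control labels (the levels and the rule are illustrative, not any specific standard): every object carries a label, and the system checks labels on every access.

    # Illustrative only - real label schemes are far richer.
    LEVELS = {"public": 0, "internal": 1, "secret": 2}

    def can_read(subject_level: str, object_level: str) -> bool:
        # A subject may read objects at or below its own level.
        return LEVELS[subject_level] >= LEVELS[object_level]

    print(can_read("internal", "public"))   # True
    print(can_read("internal", "secret"))   # False

The catch is where the label lives: it must be stored with the object in the file system itself, which is exactly why retrofitting it means a new file system - and a new installed base.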

5.9 The Fatal Flaw: Deploying Critical Stuff On A PC

Although the weaknesses of the desktop operating system were well-known early on, IT managers found the technology to be very attractive, convenient, easier to understand and cheaper. Mainframes and minis cost too much and required constant “care and feeding”. Worse, maintenance contracts were expensive and it took too long for someone to arrive when something malfunctioned. After all, if it worked at home, it should be fine for the enterprise, right? “Scale-up” just seemed easy - but it wasn't.

So, PCs migrated from the price-driven consumer market to the enterprise. Critical tasks such as accounting, payroll, invoicing and other operations were taken off of the minicomputer or mainframe and placed on the desktop. It was cheaper, more convenient and there were no monthly computer maintenance fees to pay. All in all, it looked like a smart move for business.

The problem with that move was that the requirements of a business are far different from those of an individual user. As business needs expanded, the demands on the desktop grew accordingly. Unfortunately, the desktop was not engineered to support those expanded demands. Rather, the desktop was engineered to meet the needs of the consumer market, which wanted the machine for entertainment purposes, web surfing and social networking rather than for “traditional computing”.

Multimedia, audio processing, video handling, interactive gaming and the like were all critical requirements for the consumer market but did little for the business market. In fact, some of the multimedia features actually caused new problems in the O/S as new releases came out.

The business market cared about those problems; the consumer market did not. Sales-wise, consumer PC sales outnumber business PC sales by nearly 1000-to-1, so that part of the market controls everything else – and it still does today.

Clearly, business requires more secure systems but the marketplace is not providing them because it's listening to the consumer side. Truly effective security is just not possible without fundamentally changing the desktop. That can't happen due to the size of the installed base and the corresponding economics that prevent change. So here we sit!

More to come....

© 2010 Dr. Steve G. Belovich, IQware, Inc., CEO

Rob Lewis: The best chapter in your series so far.

I recall an interview with Marcus Ranum where he stated the huge implication of your post. With distributed computing came distributed risk; desktop owners did not have the skills, tools or inclination to manage security.
Fernando Salazar: Thank you for these articles! I look forward to the last chapter!
Dr. Steve Belovich: Thanks, gentlemen. I re-read the interview with Marcus Ranum. We are all on the same page here.

An interesting fact: HP acquired Compaq, which had already acquired DEC. DEC's OVMS O/S was - and remains - the only transaction-based O/S that has never been hacked when properly configured (cf. DEFCON in 2001 & Kevin Mitnick's Congressional testimony).

DEC's OVMS mostly implemented the Reference Monitor (which I illustrate in Part 4 of the series). In other words, they built security in from the ground up.

BTW - a quick promo: Our company has built MDM & BI products on this O/S to deliver the highest security. The presentation layer uses XLIB so the client piece is ubiquitous and hyper-thin. The health care and other "regulated industries" (e.g., DoD, critical infrastructure, etc.) are our target markets because they have clear and costly legal liabilities for patient data exposure.