Welcome back to the next module of the advanced operating systems course. Recall that the Cornell experiment that we saw as the last piece of the previous module argues for a component-based design to reduce the pain points in the development of complex software systems. For industries that are designing and commercializing production operating systems and distributed services through the client-server paradigm, there is another important pain point, and that is how to design for the continuous and incremental evolution of complex distributed software systems, both in terms of functionality and performance. The short answer to the puzzle is distributed object technology. We saw how object technology is employed in the Tornado parallel operating system as a structuring tool to allow the scalability of operating system services in a parallel system. In this module of the advanced operating systems course, we are going to see examples of how distributed object technology is influencing commercial offerings in the computer industry. We'll start this lesson module with a discussion of the Spring system, which was designed and implemented at Sun Microsystems as a network operating system for use in a local area network. Later on, Spring was marketed as Sun's Solaris operating system. Before we discuss the Spring system, a little bit of history and some personal connection. Yousef Khalidi, one of the chief architects of the Spring system, got his PhD from Georgia Tech working on Clouds, which is an object-based operating system. And he was my numero uno PhD student, incidentally. Not surprisingly, the Spring system was heavily influenced by Yousef's work with Clouds. And Spring came out commercially as Sun's Solaris MC product. And for the trivia buffs out there, Yousef is now heading Microsoft's Azure cloud computing product. By the way, Azure has nothing to do with the Clouds system that Yousef developed as a grad student at Georgia Tech.
Later on, when we discuss giant scale services and cloud computing, we will feature an interview with Yousef wherein he shares his thoughts on future evolution of distributed system services.
Now back to our discussion of the Spring system at Sun. There is always a conundrum of how to innovate in the operating system. Academia is ripe for pursuing ideas that are on the lunatic fringe, but if you are in industry, you are always worried about: should we do a brand-new operating system, or do a better implementation of a known operating system? Research in industry is usually constrained by the marketplace that it serves, specifically if you're a company like Sun Microsystems, which in its heyday, between 1980 and 2005, was making Unix workstations and building large, complex server systems which run 24/7 for a variety of applications, such as airline reservations and so on and so forth. And if you are in that marketplace, the question becomes: should we build a brand-new operating system or build a better implementation of a known operating system? Marketplace demand says that, well, there are legacy applications running on your current operating system, and therefore building a brand-new operating system may not be that viable in an industrial setting. So the approach they took in the Spring system at Sun Microsystems is to be different but innovate where it makes sense. It is sort of like the commercials you may have seen that say "Intel inside": the idea is that in processor architecture, Intel is dominant, and a lot of interesting computer architecture research happens in innovating under the covers in the microarchitecture. So the external interface is still a well-known interface, like the Intel processor, but underneath they do a lot of innovation in the microarchitecture. In a similar manner, if you are a company like Sun Microsystems that peddles Unix boxes and you want to retain your customer base, then you want to make sure that the external interface remains Unix. But under the covers, you innovate where it makes sense.
And in particular, you want to make sure that everything that you do in the operating system allows third-party vendors to develop software against the new APIs that you may provide in the operating system and integrate that into the operating system, while at the same time making sure that such integration is not going to break anything. Or said differently, you want to preserve all the things that are good in a standard operating system, but at the same time you want to make sure that the innovation allows extensibility, flexibility, and so on. That's sort of the approach the Spring system took, and for all the things that I just said, using object orientation is a good choice in order to make sure that we can do innovation under the covers while keeping the external interface the same.
That brings us to a discussion of procedural design versus object-based design. You're all familiar, I'm sure, with procedural design, where you're writing your code as one monolithic entity. In a procedural world, you have shared state in terms of global variables, you may have private state in the caller and the callee, and state is distributed all over the place. The interface between the caller and the callee is the normal procedure call mechanism: one subsystem may make a procedure call that goes into another subsystem. This is how monolithic kernels are built, where state is strewn all over the place; there may be some shared state and private state of subsystems and so on. To contrast this procedural design with object-based design: in an object-based design, an object's state is entirely contained within the object, not visible outside. And there are methods inside the object that manipulate the state that is part of the object. In other words, externally the state is not visible; the only things that are visible are the methods for invocation, and these invocations work on the state that is local to the object. So what you get with an object-based design is strong interfaces and complete isolation of the state of an object from everything else. Contrast that with the procedural design, where the state can be strewn all over the place, and the shared state can be manipulated from several different subsystems that are part of a big monolith. In the object-based case, what we have is strong interfaces that completely separate one object from other objects, and the state that is specific to an object is contained entirely inside that object, invisible to other objects outside except via well-defined invocation methods that have been exposed by the object implementor to the outside world.
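The contrast above can be made concrete with a minimal sketch (the names `open_files`, `proc_open`, and `FileService` are hypothetical, chosen just for illustration):

```python
# Procedural style: shared state is strewn across the module, and any
# subsystem can mutate it directly.
open_files = {}                     # global shared state

def proc_open(name):
    open_files[name] = 0            # nothing stops other code touching the dict

# Object-based style: the state lives entirely inside the object; the only
# way in is through the methods the implementor chose to expose.
class FileService:
    def __init__(self):
        self._open_files = {}       # private state, invisible outside

    def open(self, name):           # well-defined invocation point
        self._open_files[name] = 0

    def is_open(self, name):        # the only way to observe the state
        return name in self._open_files
```

The strong interface is the pair of methods; everything else about `FileService` can change without clients noticing.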
As OS designers, the immediate question that might come up is: well, if we have these strong interfaces, it sounds similar to what we discussed when we talked about the structure of operating systems early on, and that is border crossing across protection domains. Is it going to cost us? But there are ways around it, to make these border crossings performance-conscious as well. Now, where to apply this object orientation? Well, in Spring, for instance, they applied object orientation in building the operating system kernel. So the key point to take away is: if object orientation is good at the level of implementing a high-performance operating system kernel, it should be good at higher levels of the software as well. And while I am expounding the virtues of object-based design here, we have already seen this when we talked about the Tornado system. That was also using an object-based approach to building operating system kernels.
So the Spring approach to building an operating system is to build strong interfaces for each subsystem. What that means is that the only thing exposed outside a subsystem is what services are provided by the subsystem, but not how. In other words, the how part of it can be changed at any time, so long as the external interface remains unchanged. That is what is meant by strong interfaces, and this naturally leads to object orientation. They also wanted to make sure that the system is open and flexible. This is important if you're an operating system vendor and you want to integrate third-party software into your operating system: you want to make sure that your interfaces are open and flexible, and at the same time you want to maintain the integrity of your subsystems, and that's why strong interfaces are extremely important. Being open and flexible also suggests that you don't want everything to be written in one language. You don't want to be tied to a particular language for implementing all the system components, and this is the reason that in Spring they chose to use IDL, the Interface Definition Language, from the OMG group. There are IDL compilers available from several third-party software vendors, and what that allows you to do is define your interfaces using IDL, and third-party software vendors can use that IDL definition of the interfaces in building their own subsystems that can integrate with the Spring system. The other part of the Spring approach is extensibility, and extensibility naturally leads to a microkernel-based approach, and that's what you see here. This is the structure of the Spring system, and what you see below this red line is Spring's idea of a microkernel; in fact, there are two parts to it. There is a nucleus, which in Spring is the entity that provides the abstractions of threads and interprocess communication among the threads.
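To give a flavor of what a strong interface buys you, here is a sketch in Python using an abstract base class as a stand-in for an IDL-defined interface (Spring's real interfaces are written in OMG IDL; the `FileSystem` interface and `InMemoryFileSystem` implementation below are made up for illustration):

```python
from abc import ABC, abstractmethod

# The interface declares WHAT services the subsystem provides, never HOW.
class FileSystem(ABC):
    @abstractmethod
    def open(self, path: str) -> int: ...
    @abstractmethod
    def read(self, fd: int, nbytes: int) -> bytes: ...

# A third-party vendor can ship any implementation behind the interface
# and swap it at any time without clients noticing.
class InMemoryFileSystem(FileSystem):
    def __init__(self):
        self._files = {"/etc/motd": b"hello"}   # fake backing store
        self._fds = {}                          # fd -> (path, offset)

    def open(self, path):
        fd = len(self._fds)
        self._fds[fd] = (path, 0)
        return fd

    def read(self, fd, nbytes):
        path, off = self._fds[fd]
        data = self._files[path][off:off + nbytes]
        self._fds[fd] = (path, off + len(data))
        return data
```

Clients program only against `FileSystem`; the "how" can be replaced wholesale, which is exactly the property the IDL-based design is after.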
And the kernel itself is made up of the nucleus plus the virtual memory manager. So if you put these two things together, the nucleus gives you threads and IPC, and the VM manager gives you memory management. And if you remember back to our good old friend Liedtke's principle of what a microkernel should provide, you see that what is below this red line is exactly Liedtke's principle: the microkernel is providing the abstraction of threads and IPC and an abstraction of memory, and everything else is outside the kernel. All the things that are above the red line are outside the kernel. In particular, I mentioned that Spring is Sun Microsystems' answer to building a network operating system, because this is a time when a transition was happening to services being provided on the network. And so they wanted to go from an operating system that runs on a single node to a network operating system using the same interface, namely the Unix interface. This entity that you see here, which is called the network proxy, is the entity that allows machines to be connected to one another; we'll see more of it in later discussion in this lesson. All the ovals that you're seeing outside the kernel provide different services that you might need in your desktop environment. For instance, an X11 server is a display manager, and you may need the ability to do shell-level programming, and you need a file system, and you need a way by which you can communicate on the network, meaning that you need a protocol stack.
The nucleus is the microkernel of Spring, and it is a subset of Liedtke's prescription, as I mentioned just now, in the sense that the nucleus manages only threads and IPC. The abstractions available in the nucleus are the following. There is the domain. A domain is similar to a Unix process; it's a container, or an address space, and threads can execute in a particular domain. These threads are similar in semantics to the pthreads that we have seen before. And this abstraction called a door is a software capability to a domain. You can think of it as the real-life analogy of opening a door in order to get into a room. In a similar manner, if you have a handle to the door, you can open the door and enter a target domain. That's the idea behind a door. So any domain can create these nucleus entities called doors, which are essentially entry points for entering the target domain. With object orientation, I told you that the only thing that you can do is make invocations on objects, and the entry points available on the objects contained in a domain are represented by this abstraction called a door, provided by the nucleus. Let's say I'm a file server. What will I do? Well, I have entry points in my file server, such as opening a file, or reading a file, writing a file, and so on. Basically, I will create those entry points as doors into my domain. And if I'm a client, how do I get access to the entry point that's available in the target domain? Well, the way I do that is exactly similar to how you may open a file in a Unix file system. What you do is an fopen, and when you do that, you get a file descriptor, which is a small integer that is a handle for you to access that file. In a similar manner, if I'm a client and I want the ability to invoke a particular entry point in a target domain, then what I want is access to this door, and the way I get that is by getting a door handle. So I get a door handle.
So every domain will have a door table, which is similar to the file descriptor table that you may have in a Unix process. And every door ID that you have in this door table points to a particular door. If I have a door handle in my door table for a particular door, what that tells me is: oh, I have the ability to make an invocation in the target domain that this particular door corresponds to. So the possessor of a door handle is able to make object invocations on the target domain using this door handle. And as you can see, a particular client domain can have a door table that has access to several different target domains. So in this case, one entry in my door table points to this door, which is an entry point into this target domain, and another entry points to a different door, a different set of entry points, and I have access to them as well. And multiple clients may have access to the same door, because if it's a file system, for instance, you may be able to access the file system, I may be able to access the file system, and so on. So the door table is something that is unique to every domain, and it gives that domain the ability to access the entry points in the target domain so that it can make object invocations. The way to think about a door is that it's basically a software capability to a domain. Since we are using object orientation, it is represented by a pointer to a C++ object that represents the target domain. And a door can be passed from domain to domain; it is a software capability, and when it is passed from domain to domain, it gives those domains the ability to actually get access to the entry points, specified through the door, to the target domain. And the Spring kernel itself is a composition of the nucleus plus the memory management that is inherent in the fact that these domains represent an address space.
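The door and door-table idea can be sketched in a few lines (a toy model, not Spring's actual API; all class and method names here are invented for illustration):

```python
class Door:
    """An entry point into a target domain (a nucleus entity)."""
    def __init__(self, target_domain, method_name):
        self.target_domain = target_domain
        self.method_name = method_name

class Domain:
    """A container/address space; its door table maps small-integer
    handles to doors, the way Unix file descriptors map to open files."""
    def __init__(self, name):
        self.name = name
        self.door_table = {}            # door handle (small int) -> Door

    def grant(self, door):
        handle = len(self.door_table)   # hand out the next small integer
        self.door_table[handle] = door
        return handle

class FileServerDomain(Domain):
    def open_file(self, path):          # an entry point exposed as a door
        return f"opened {path}"

server = FileServerDomain("file-server")
client = Domain("client")
# The server's open_file entry point becomes a door; the client receives a
# door handle for it in its own door table.
h = client.grant(Door(server, "open_file"))
```

Possessing `h` is what gives the client the ability to invoke that one entry point, and nothing else, in the server domain.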
Now, how do you go about making an object invocation, that is, a protected procedure call into a target domain from a client domain? How do I do that? Well, the nucleus is involved in every door call; to open the door, I need the permission of the nucleus. What I do is make the invocation using the small descriptor that I have, which is a door handle. The nucleus looks at it and says, okay, this domain has the ability to do this invocation, and it allocates a server thread in the target domain and executes the invocation indicated by this particular door handle. It's a protected procedure call, and since it has procedure call semantics, the client thread is deactivated, and a thread is allocated to the target domain so that it can execute the invocation for the method indicated by this door handle. And on return from the target domain, once that protected procedure call is complete, the server thread is deactivated and the client thread is reactivated, so that the client can continue with whatever it was doing before. So this is very, very similar to the communication mechanism that we discussed in the lightweight RPC paper before, in the sense that we're doing very fast cross-address-space calls using this door mechanism. This protected procedure call is an illustration of how the nucleus makes sure that, even though it is using object orientation in the structuring of the operating system kernel, it will still be performant, in the sense that you can do these cross-domain calls very quickly through this idea of deactivating the client thread, quickly activating a thread to execute the entry point procedure in the target domain, and on return reactivating the client thread. And all of this results in very fast cross-address-space calls through this door mechanism.
That's how you make sure that you get all the good attributes of object orientation and not sacrifice on performance at the same time.
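The control flow of a protected procedure call can be sketched as follows (a simulation of the handoff semantics only; `Nucleus`, `door_call`, and the trace strings are invented for illustration, and real thread switching is of course not modeled):

```python
class Door:
    def __init__(self, target, method):
        self.target, self.method = target, method

class Nucleus:
    """Toy nucleus: validates the door handle, 'deactivates' the client
    thread, runs the server's entry point, then reactivates the client."""
    def __init__(self):
        self.trace = []

    def door_call(self, door_table, handle, *args):
        door = door_table.get(handle)
        if door is None:                    # nucleus checks the capability
            raise PermissionError("no such door handle")
        self.trace.append("client deactivated")
        # procedure-call semantics: the invocation runs in the target domain
        result = getattr(door.target, door.method)(*args)
        self.trace.append("client reactivated")
        return result

class FileServer:
    def open_file(self, path):
        return f"fd for {path}"

nuc = Nucleus()
fs = FileServer()
table = {0: Door(fs, "open_file")}          # the client's door table
```

A call such as `nuc.door_call(table, 0, "/tmp/x")` then behaves like a local procedure call to the server's entry point, with the nucleus mediating entry and exit.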
As I mentioned, Spring is a network operating system. So what I described to you just now is how object invocation works within a single node. But these doors are confined to the nucleus on a single node, and we need to be able to do object invocation across the network. The client domain may be over here, and the server domain may be on a different node on the local area network. Object invocation between client and server across the network is extended using network proxies. For example, on the client box there is this Proxy B, and on the server box there is Proxy A, and proxies can potentially be different for connecting to different servers. So this client may talk to this server using this proxy, and may talk to a different server, which I'm not showing here, using a completely different proxy. In other words, the proxies can potentially employ different protocols. That's where you have the opportunity to specialize: whether the communication happening between the client and server is on a local area network or a wide area network, and so on. Depending on that, you can employ the protocol that is appropriate for use in the proxy. So this is a key property of building a network operating system at Sun: they wanted to make sure that decisions are not being ingrained in the operating system of a single node in terms of the connectivity of that node to other nodes on the network. Depending on where the server for a particular client is located, you can employ different protocols to talk between the proxies that are on the client machine and the server machine. And also, the proxies are invisible to the client and the server. In other words, the client and the server are unaware whether they are both on the same machine or on different machines, and they don't care. Let's see how this client-server relationship is established using these proxies.
So when a client-server connection has to be made across the network, the first thing that happens is you instantiate a proxy on the server node and establish a door for communication between Proxy A and the server domain through the nucleus on the server machine. And now what Proxy A is going to do is export a network handle embedding this Door X to its peer proxy, Proxy B, on the client node. Notice that this interaction going on between Proxy A and Proxy B is outside of anything that is in the purview of the nucleus. So the network handle that is being established has nothing to do with the primitives or the mechanisms that are available in the nucleus of the Spring system. What Proxy A is doing is creating a network handle embedding this Door X, and it is going to export that to Proxy B. Proxy B has a door that it has established locally on Nucleus B, so that the client domain can communicate with it. And now what Proxy B will do is use the network handle that has been exported by Proxy A to establish a connection between the two nuclei. So this network handle, and the communication that goes on between these two guys, is not through the nucleus. That's important for you to understand. So now, how does the client make an invocation on the server domain? Well, when the client wants to make an invocation, it thinks that when it is accessing Door Y, it is accessing the server's domain. But it isn't. What it is accessing is this Proxy B, and of course access to this Door Y, which is in Proxy B, is blessed by Nucleus B. When this invocation happens, Proxy B is then going to communicate through the network handle that it has with its peer Proxy A. And peer Proxy A, when it gets this client invocation proxied through Proxy B and arriving at Proxy A, will know that, oh, this is really intended for the server domain.
And I know how to access that through the door that I have to the server domain, and it uses that door in order to make the actual invocation. So to recap what is really going on: the client wants to open this Door X, but it doesn't have a direct handle on Door X because the server domain is on a different node of the network. Therefore, the way remote invocation is accomplished is that the server domain's door, which is the entry point into the server domain, is passed on by Proxy A via a network handle to its peer proxy on a different node, in this case the client node. Once this network handle is available to Proxy B, it can establish the connection between the nuclei, and once this connection is established, the client domain thinks it is making an invocation call on Door X, but in fact it is being passed through Door Y to the proxy. The proxy uses the network handle to communicate that invocation over to Proxy A, which then uses the actual Door X to make the invocation on the server domain and execute the client domain's call.
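The relay described above can be sketched like this (the classes, the `receive` message shape, and the in-process "network handle" are all stand-ins invented for illustration; the real system runs a protocol between two nodes):

```python
class Server:
    """The real server domain behind Door X."""
    def lookup(self, name):
        return f"result for {name}"

class ProxyA:
    """Lives on the server node; holds the real door into the server."""
    def __init__(self, server):
        self._door_x = server

    def receive(self, method, arg):         # invocation arriving over the 'network'
        return getattr(self._door_x, method)(arg)

class ProxyB:
    """Lives on the client node; Door Y, as seen by the client."""
    def __init__(self, network_handle):
        self._net = network_handle          # connection exported by Proxy A

    def lookup(self, name):                 # client thinks this is the server
        return self._net.receive("lookup", name)

server_side = ProxyA(Server())
client_view = ProxyB(server_side)           # stand-in for the nucleus-to-nucleus link
```

The client calls `client_view.lookup(...)` exactly as it would call a local door, which is the transparency property the proxies are designed to provide.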
It may often be necessary for a server object to provide different privilege levels to different clients. For instance, if you have a file server, the file server may have different access privileges for different classes of users. In order to facilitate that kind of differential invocation of objects, the security model that Spring provides is via what is called a front object that sits in front of the underlying object. An underlying object may have a front object that is completely outside of the Spring semantics for object invocation. The connection between the front object and the underlying object is entirely within the purview of the implementer of the service. In other words, this connection is not through the door mechanism that I told you the Spring system provides. So, all that the client domain is going to be able to do is access the front object. The front object will register the door for accessing it with the nucleus, so that the client can go through this door to the front object, and the front object is the one that is going to check the access control list, ACL, in order to see what kind of privileges this client domain has to make an invocation on the underlying object. And it is possible to have multiple front objects for the underlying object, with distinct doors registered with the nucleus, for different implementations of the control policies that you want for a particular service. So, in other words, the policies that you want for accessing the services available in an underlying object can be implemented in this front object, or in different instances of this front object, depending on how many different control policies you want. So when a client invocation comes in through this door to the front object, the ACL, the access control list, is checked before allowing the invocation to actually go through to the underlying object.
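A minimal sketch of the front-object idea, assuming a toy ACL keyed by client name (the ACL layout and all names here are invented for illustration, not Spring's actual security API):

```python
class UnderlyingFileObject:
    """The real service object; never reachable directly by clients."""
    def read(self, who):
        return f"contents for {who}"

class FrontObject:
    """Registered behind the client-visible door; checks the ACL before
    forwarding the invocation to the underlying object."""
    def __init__(self, underlying, acl):
        self._underlying = underlying
        self._acl = acl                     # client name -> set of allowed methods

    def invoke(self, who, method):
        if method not in self._acl.get(who, set()):
            raise PermissionError(f"{who} may not {method}")
        return getattr(self._underlying, method)(who)

front = FrontObject(UnderlyingFileObject(), {"alice": {"read"}})
```

Different instances of `FrontObject` with different ACLs give you different control policies over the same underlying object, which is exactly the multiple-front-objects arrangement described above.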
As I mentioned earlier, if this client domain has access to an invocation entry point in a server, that is, it has access to a door, the client domain can pass this around, because it is a software capability. And a software capability can be passed by the client domain to other domains, so that they can use that same capability to access the same object. But in so doing, the client domain can decide whether it wants to give the same privilege for accessing this object or a lesser privilege than what it has. Those are the things that can be implemented as policies through this front object. For example, let's say that the user wants to print a file foo. The user, of course, has full access to the file system for this particular object, that is, the file that the user has created. This is a reference to the object foo, and the user has full access to that. But it wants to print the file, and it doesn't want to give the printer object any more privilege than it needs to have to print it. In particular, if I want to print a file, then all I need to do is give a one-time privilege to the printer object in order to print that file. So what I'm going to do is take the capability that I've got for this file foo, reduce the privilege level, and say: you've got a reference to the same object, but you have a one-time reference. Now the printer object can access the file system and present its capability, and the front object, which is associated with the file system, will verify that, yes, the one-time ticket that this guy has is not expended yet, and therefore it is allowed to access this file so that it can do its job of printing. But if it tries to present the same handle again, it'll be rejected by the front object associated with the file system, because this is a one-time reference. The capability provided by the user to the printer is a one-time capability.
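The privilege-reduction step in this printing example can be sketched as follows (again illustrative only; `OneTimeCapability` and its `use` method are invented names, and the expiry check would really live in the file system's front object):

```python
class FileObject:
    """Stand-in for file foo; the user holds a full-privilege reference."""
    def __init__(self, data):
        self._data = data

    def read(self):
        return self._data

class OneTimeCapability:
    """A reduced-privilege reference to the same object: usable exactly once.
    This is what the user hands to the printer instead of its full capability."""
    def __init__(self, obj):
        self._obj = obj
        self._used = False

    def use(self):
        if self._used:          # the 'one-time ticket' has been expended
            raise PermissionError("one-time capability already expended")
        self._used = True
        return self._obj.read()

foo = FileObject("print me")
ticket = OneTimeCapability(foo)     # derived capability given to the printer
```

The printer can present `ticket` once to do its job; a second presentation is rejected, mirroring the front object's check of the one-time reference.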
So we've seen how object invocation can happen efficiently through the door mechanism and the thread hand-off mechanism that I mentioned within a single node; it can happen efficiently across the network through the proxies; and it can also happen securely, by the fact that you can associate policies in front objects that govern access to the objects. So these are all the mechanisms provided in the Spring kernel, and this is where the innovation happens. In other words, even though the external interface is a Unix operating system, under the covers the Spring system does all of this innovation in terms of how to structure the operating system itself using object technology.
This question concerns the abstractions that are available in the nucleus. Remember, the nucleus is the microkernel of Spring. And the question asks: what is the difference between the primitives, or abstractions, available in the nucleus and Liedtke's prescription for what a microkernel should look like? These are the features that I am talking about: the first feature is the abstraction of threads, the second is interprocess communication, and the third is address space. What I want you to do is fill in this table in terms of which of the abstractions I have mentioned here are available in the nucleus versus Liedtke's prescription for a microkernel.
If you've been with me so far, you know that the nucleus provides only threads and IPC. It doesn't provide the abstraction of an address space, because that is implemented elsewhere inside the kernel of the Spring system. So the nucleus, containing only threads and IPC, is not a microkernel by Liedtke's prescription, which says you should have all three in the microkernel. In fact, the Spring system does have all three; it is just that in the Spring system they name things differently. They call the nucleus their microkernel, but the idea of a kernel in the Spring system contains all three entities, even though the nucleus doesn't contain the address space.
So virtual memory management is part of the kernel of Spring, and now we will talk about how virtual memory management happens in the Spring operating system. There is a per-machine virtual memory manager, and the virtual memory manager is in charge of managing the linear address space of every process. As we know, the linear address space of a process is what the architecture gives you, and what the virtual memory manager does is break this linear address space into regions. You can think of a region as a set of pages. So you take the linear address space given by the architecture, that's the process address space, and break that up into regions, where each region is a set of pages, and each region can be of a different size. The second abstraction in the virtual memory management system is what is called a memory object, and the idea of breaking up the linear address space into regions is to allow these regions to be mapped to different memory objects. So, for instance, this region is mapped to this memory object. This region is mapped to a portion of this memory object. And these two regions, different regions of the same address space, are mapped to the same memory object, and this is perfectly fine. So this is how the virtual memory manager takes the linear address space and maps it to these memory objects. And what are these memory objects? The abstraction of a memory object allows a region of virtual memory to be associated with a backing file, or it could be the swap space on the disk, and things like this. So the memory object is the mechanism by which portions of the address space can be mapped to different entities, which may be swap space on the disk or files in a file system. All of those are available through the abstraction of the memory object, so that regions in an address space can be mapped to backing entities. And it is also perfectly possible that multiple memory objects may map to the same backing file.
So the way to think about these abstractions is: linear address space broken into regions, regions mapped to memory objects, and a memory object is an abstraction for things living on the backing store, meaning a disk. It could be the swap space on the disk, or it could be specific files that are being memory-mapped in order to be accessed from a process address space. Those are the abstractions available in the virtual memory management system. Now we'll see how these memory objects are paged in and brought into physical memory.
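The region and memory-object abstractions can be sketched as simple data structures (names, the fixed page size, and the backing-store strings are all invented for illustration):

```python
PAGE = 4096                         # assumed page size for the sketch

class MemoryObject:
    """Stands for a backing entity: a file, or swap space on the disk."""
    def __init__(self, backing):
        self.backing = backing

class AddressSpace:
    """A process's linear address space, carved into regions; each region
    (a set of pages) maps to a portion of some memory object."""
    def __init__(self):
        self.regions = []           # (start, npages, memory_object, offset)

    def map_region(self, start, npages, mobj, offset=0):
        self.regions.append((start, npages, mobj, offset))

    def object_for(self, addr):
        for start, npages, mobj, off in self.regions:
            if start <= addr < start + npages * PAGE:
                return mobj
        return None                 # address falls in no mapped region

swap = MemoryObject("swap")
libc = MemoryObject("/lib/libc.so")
aspace = AddressSpace()
aspace.map_region(0x0000, 4, swap)
aspace.map_region(0x10000, 2, libc)                 # two regions of one address
aspace.map_region(0x20000, 2, libc, offset=PAGE)    # space sharing one object
```

Note how two different regions can map to (different portions of) the same memory object, exactly as in the lecture's figure.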
So here is a virtual memory manager, and it is responsible for an address space that it is governing; this is the guy that is going to worry about breaking a linear address space into regions and mapping those regions to specific memory objects. For a particular process living in an address space to access a particular memory object, obviously this memory object has to be brought into DRAM, and that is what a pager object is going to do. This is equivalent to the idea of what are called external pagers in other systems, such as Mach. A pager object is responsible for making or establishing the connection between virtual memory and physical memory. A portion of the virtual memory, that is, a region of the linear address space, has been mapped to this memory object, and it is the responsibility of this pager object to make sure that this memory object has a representation in physical memory when the process wants to access the portion of the address space range that corresponds to this memory object. So this pager object creates what is called a cached object representation for the memory object in the DRAM. So now the portion of the address space, that is, the region of the address space that this virtual memory manager mapped to memory object one, becomes available for the process to address in its DRAM, because of the work done by this pager object in mapping this memory object into the DRAM. Similarly, a different virtual memory manager object managing a different address space can map another memory object and create a cached representation for this address space, mapping a region of its address space to this memory object using its pager object. I mentioned that the address space manager can make any number of such mappings between regions of the linear address space and memory objects.
For instance, there's another region of the linear address space that is mapped to memory object 2, and there may be a pager object that governs the paging of this object into a DRAM representation. So there's a cached object representation for this memory object, which is part of a region of the linear address space of a particular process managed by VMM1. So in this example, VMM1 has two distinct memory objects, memory object 1 and memory object 2, cached on behalf of a process, and there are two pager objects, one for each of these mappings. The important point I want to get across is that there is not a single paging mechanism that needs to be used for all the memory objects. It gives you the ability to have different regions of the linear address space of a given process managed differently, by associating a different pager object with each region that corresponds to a particular memory object. And all of these associations between regions and memory objects can be created dynamically. So for instance, this address space manager may decide to associate a region in its linear address space with memory object 3. If it does that, then there is a new pager object that is going to manage the association between the region of the virtual address space mapped to memory object 3 and the cached object, which is the DRAM representation of this memory object created by that pager object. Now this is an interesting situation, because you have a memory object that is shared by two different address spaces, and there are two distinct pager objects: one managing the region of the address space in VMM1 that maps to this memory object, and one managing the region of the address space in VMM2 that maps to the same memory object.
Now what about the coherence of the cached representation of this object that exists over here, and the cached representation of this object that exists over there? Who manages that? Well, it's entirely up to the pager objects that have been instantiated to manage the mapping between this memory object and the cached objects. So if coherence is needed for the cached representations of this memory object in the DRAM of these two address spaces, then it is the responsibility of these two pager objects to coordinate that. It's not something that the Spring system is responsible for, but Spring provides the basic mechanisms through which these entities can manage the regions they are mapping, in terms of the memory objects and the DRAM representations of those objects. So in other words, external pagers establish the mapping between virtual memory, which is indicated by these memory objects, and physical memory, which is represented by the cached objects. In summary, the way memory management works in the Spring system is this: the address space managers are responsible for managing the linear address space of a process, and they do that by carving the linear address space up into regions and associating the regions with different memory objects. These memory objects may be swap space on the disk, or they could be files that are mapped into specific regions of the linear address space. It is entirely up to the application what it wants to do with them, but these abstractions are powerful for facilitating whatever the intent of the user may be. Mapping the memory objects to the cached representations, which live in DRAM, is the responsibility of pager objects, and you can have any number of external pagers that manage this mapping. In particular, through this example I've shown you that for a single linear address space, you can have multiple pager objects managing different regions of that same address space.
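To make these relationships concrete, here is a minimal sketch in Python. The class and method names are my own illustrative choices, not Spring's actual interfaces: a virtual memory manager carves its linear address space into regions, each region is mapped to a memory object (an abstraction for data on backing store), and a pager object creates the cached (DRAM) representation for that mapping.

```python
# Illustrative sketch (names are assumptions, not Spring's API): regions of a
# linear address space map to memory objects, and per-mapping pager objects
# create cached (DRAM) representations of those memory objects.

class MemoryObject:
    """Abstraction for data on backing store: swap space or a mapped file."""
    def __init__(self, name, backing):
        self.name = name
        self.backing = backing          # e.g. "swap" or a file path

class CachedObject:
    """DRAM representation of a memory object, created by a pager object."""
    def __init__(self, memory_object):
        self.memory_object = memory_object

class PagerObject:
    """Connects one region's memory object to physical memory. Coherence
    across address spaces sharing a memory object is the pagers' job,
    not the kernel's."""
    def page_in(self, memory_object):
        return CachedObject(memory_object)

class VirtualMemoryManager:
    """Manages one linear address space; maps its regions to memory objects."""
    def __init__(self):
        self.regions = {}               # (start, size) -> (mem_obj, pager, cached)

    def map_region(self, start, size, memory_object, pager):
        cached = pager.page_in(memory_object)
        self.regions[(start, size)] = (memory_object, pager, cached)
        return cached

# Two address spaces can map the SAME memory object through different
# pager objects, each producing its own cached representation.
mo = MemoryObject("shared", backing="/tmp/file")
vmm1, vmm2 = VirtualMemoryManager(), VirtualMemoryManager()
c1 = vmm1.map_region(0x1000, 0x2000, mo, PagerObject())
c2 = vmm2.map_region(0x8000, 0x2000, mo, PagerObject())
assert c1 is not c2 and c1.memory_object is c2.memory_object
```

The final assertion captures the situation from the lecture: one shared memory object, two distinct cached representations, whose coherence these two pager objects would have to coordinate themselves.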
And that's the flexibility and power that's available in the structure of the Spring system, using the object technology.
So, to summarize the facilities and primitives available in the Spring system: object orientation, object technology, permeates the entire operating system design. It is used as a system structuring mechanism in constructing a network operating system. To break it down, in the Spring system you have the nucleus, which provides you threads and IPC among threads. The microkernel prescription of Liedtke is accomplished by the combination of the nucleus plus the address space management that is part of the Spring system's kernel boundary. Everything else lives above this kernel, meaning all the services you normally associate with an operating system, such as the file system, network communication, and so on, are all provided as objects that live outside of this kernel. And the way you access those objects is through doors. In every domain there is a door table that holds the set of capabilities a particular domain has for accessing doors in different domains. This door and door table is the basis for cross-domain calls. Through the object orientation and the network proxies, you can have object invocation implemented as protected procedure calls, both on the same node and across machines. And finally, it does virtual memory management by providing certain basic abstractions, namely the linear address space, memory objects, external pagers, and the cached object representation. Now, to contrast this with Tornado: in Tornado also, we saw that it was using object technology, but the contrast is pretty distinct. Tornado uses the clustered object as an optimization for implementing services, for example, deciding whether a particular object has a singleton representation, or a representation per processor, and so on. Those are the kinds of optimizations that are accomplished using the clustered object in the Tornado system.
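The door and door table mechanism can be sketched as follows. This is a hypothetical illustration, not Spring's real kernel interface: a door is an entry point into a target domain, and each domain's door table maps small-integer capabilities to the doors that domain is allowed to invoke.

```python
# Illustrative sketch (names are mine, not Spring's API): a door is an entry
# point into a server domain; each client domain holds a door table of small
# integers (capabilities) naming the doors it may call.

class Door:
    """Entry point into a target domain: invoking it transfers control to a
    procedure in that domain (a protected procedure call)."""
    def __init__(self, target_domain, procedure):
        self.target_domain = target_domain
        self.procedure = procedure

class Domain:
    def __init__(self, name):
        self.name = name
        self.door_table = {}            # door descriptor -> Door
        self._next_fd = 0

    def grant(self, door):
        """Install a door in this domain's table; the returned small integer
        is the capability this domain uses to invoke it."""
        fd = self._next_fd
        self._next_fd += 1
        self.door_table[fd] = door
        return fd

    def door_call(self, fd, *args):
        """Cross-domain call: only legal if the capability is in the table."""
        door = self.door_table[fd]      # KeyError means: no capability
        return door.procedure(*args)

file_server = Domain("file_server")
client = Domain("client")
fd = client.grant(Door(file_server, lambda path: f"contents of {path}"))
print(client.door_call(fd, "/etc/motd"))    # a protected procedure call
```

The point of the sketch is the capability discipline: a domain can only reach domains whose doors appear in its own door table, which is exactly what makes the cross-domain call protected.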
Whereas in the Spring system, object technology permeates the entire operating system design: it is used as a system structuring mechanism in constructing a network operating system, not just as an optimization mechanism.
Spring is a network operating system, and the clients and the servers can be on the same machine or on different nodes of a local area network. In the Spring system, they wanted to carry this idea of extensibility further, to say that the client and the server should be impervious to where they are in the network. In other words, the client-server interaction should be freed from the physical locations of the clients and the servers. So for instance, in this picture the clients and the servers are on the same machine. Suppose we've decided to replicate the servers in order to increase availability; now we have several copies of the servers, and the clients are dynamically bound to different servers for load distribution. For those of you who are familiar with how services like Google work today, this is exactly what happens in the services we use on an everyday basis: when we access Google, our client requests are routed to different servers. The same sort of thing happens in the Spring system: once you replicate the server, you want the client requests to be routed to different servers depending on the physical proximity of the client to the servers, as well as the load currently being handled by one server versus another. Another variation of the same theme is where the server is not replicated but cached. For instance, if it is a web server, there could be a cached proxy for the web server, and in that case the client requests need not go to the origin web server but can go to the cached copies that are available. Here again, the decision of routing a client request to a particular cached copy of the server is taken dynamically.
Now, all of this sounds like magic in terms of how this client-server relationship is being dynamically orchestrated: whether they are on the same machine, or whether we dynamically decide to replicate the servers and route the requests to different servers, or we want to cache the servers and route the client requests to different cached copies. All of these are decisions taken dynamically. And how is this done? Well, that's the part that we're going to see next.
The secret sauce that makes this dynamic relationship between the client and the server possible is a mechanism called the subcontract. It's sort of like the real-life analogy of offloading work to a third party: you give a subcontract to somebody to get some work done. That's the analogy being used here in the structure of the Spring network operating system. I mentioned earlier that the contract between the client and the server is established through the IDL, the Interface Definition Language used to create the contract between the client and the server. The subcontract is the interface that is provided for realizing the IDL contract between the client and the server. So here is the IDL interface, and the client is using the server's IDL interface to make invocations on the server. An implementation of this IDL interface is accomplished through the subcontract mechanism. Put differently, the subcontract is a mechanism to hide the runtime behavior of an object behind the actual interface. For instance, there could be a singleton implementation of the server, or a replicated implementation of the server; the client does not care, and does not know. All of the detail of how this client's IDL interface is satisfied is in the subcontract itself. What that means is that client-side stub generation becomes very simple, because all of the details are in the subcontract mechanism: where the server is, how to access the server, whether the server is on the same machine or on a different machine, whether there are multiple copies of the server, and which copy of the server to go to. That makes the life of the client-side stub generator very, very simple. So the subcontract lives under the covers of the IDL contract, and you can change the subcontract at any time. For instance, if you don't like the work being done by one subcontractor, you give the work to a different one.
The same sort of thing can happen here: the subcontract is something that you can discover and install at runtime. In other words, you can dynamically load new subcontracts. For instance, if a singleton server got replicated, then you get a new subcontract that corresponds to the replicated server, so that you can now access the replicated servers using the new subcontract. And nothing needs to change above this line: the client stub doesn't have to do anything differently. All of the details are handled by the subcontract, seamlessly. So in other words, you can seamlessly add functionality to existing services using the subcontract mechanism.
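Here is a minimal sketch of that idea, with names of my own choosing rather than Sun's actual classes: the client stub delegates every call to whatever subcontract is currently installed, so a singleton subcontract can be swapped for a replicated one at runtime without the stub changing at all.

```python
# Illustrative sketch (assumed names, not Sun's code): the stub delegates to
# a pluggable subcontract, which can be replaced at runtime.
import itertools

class SingletonSubcontract:
    """Routes every invocation to one server instance."""
    def __init__(self, server):
        self.server = server
    def invoke(self, method, *args):
        return getattr(self.server, method)(*args)

class ReplicatedSubcontract:
    """Round-robins invocations over server replicas; the stub above it
    never learns that the server was replicated."""
    def __init__(self, replicas):
        self._next = itertools.cycle(replicas)
    def invoke(self, method, *args):
        return getattr(next(self._next), method)(*args)

class ClientStub:
    """Generated from the IDL interface; knows nothing about location."""
    def __init__(self, subcontract):
        self.subcontract = subcontract      # swappable at runtime
    def lookup(self, key):
        return self.subcontract.invoke("lookup", key)

class NameServer:
    def __init__(self, name):
        self.name = name
    def lookup(self, key):
        return f"{key}@{self.name}"

stub = ClientStub(SingletonSubcontract(NameServer("s1")))
print(stub.lookup("printer"))               # served by the singleton s1

# Later: the server gets replicated, so install a new subcontract.
stub.subcontract = ReplicatedSubcontract([NameServer("s1"), NameServer("s2")])
print(stub.lookup("printer"))               # now s1 or s2; stub unchanged
```

Notice that `ClientStub.lookup` is identical before and after the swap; only the subcontract underneath changed, which is the seamless extensibility the lecture describes.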
Now let's look at the interface that's available to the stubs on the client side and the server side through the subcontract mechanism. The first interface, of course, is for marshaling and unmarshaling. The client-side stub has to marshal the arguments from the client, and in order to do that, it has calls it can make on the subcontract saying: marshal these arguments for me. The subcontract will do that for you. Whether this invocation is going to a server on the network, on the same machine, or on a different processor of the same machine, all of those details are buried in the subcontract. Therefore, when the client stub wants to marshal the arguments for a particular invocation, it just calls the subcontract and says, please marshal these arguments for me; the subcontract knows the way in which this particular invocation is going to be handled, and so it can do the appropriate thing for marshaling the arguments based on the location of the server. That's the beauty of the subcontract mechanism, and this is true on the server side as well as on the client side. Once the marshaling has been done, the client side can make the invocation, and when it does, once again the subcontract says: I know exactly where this particular invocation is going to go, and it takes care of that. So the subcontract on the client side has this invocation mechanism, obviously, because the client is the one that makes the invocation. On the server side, the subcontract gives a different set of mechanisms: it allows the server to revoke a service, or to tell the subcontract that it is open for business by saying it is ready to process invocation requests. So what you see is that between the client side and the server side, the boundary is right here.
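The two sides of that interface can be sketched as follows. All names here are my assumptions for illustration; marshaling is simulated with pickling, whereas a real subcontract would choose a wire format and transport based on where the server actually is.

```python
# Illustrative sketch of the subcontract interface (assumed names): the
# client side exposes marshal/unmarshal/invoke; the server side exposes
# ready ("open for business") and revoke.
import pickle

class ClientSubcontract:
    def __init__(self, transport):
        self.transport = transport          # callable: bytes -> bytes
    def marshal(self, method, args):
        # A real subcontract would pick the format per server location.
        return pickle.dumps((method, args))
    def unmarshal(self, payload):
        return pickle.loads(payload)
    def invoke(self, method, *args):
        reply = self.transport(self.marshal(method, args))
        return self.unmarshal(reply)

class ServerSubcontract:
    def __init__(self, server):
        self.server = server
        self.open = False
    def ready(self):
        """Server announces it is ready to process invocation requests."""
        self.open = True
    def revoke(self):
        """Server withdraws the service."""
        self.open = False
    def process(self, payload):
        if not self.open:
            raise RuntimeError("service revoked")
        method, args = pickle.loads(payload)
        return pickle.dumps(getattr(self.server, method)(*args))

class Calculator:
    def add(self, a, b):
        return a + b

server_sc = ServerSubcontract(Calculator())
server_sc.ready()                           # open for business
client_sc = ClientSubcontract(transport=server_sc.process)
print(client_sc.invoke("add", 2, 3))        # prints 5
```

In this toy version the "transport" is a direct function call, standing in for the same-machine case; pointing it at a network channel instead would not change anything the stub sees, which is exactly the location transparency the subcontract provides.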
The client stub and the server stub don't have to do anything differently, whether the client and the server are on the same machine or on different machines. Replicas of the server, cached copies of the server, none of those things make a difference in terms of what the client (and when I say client, I mean the client application plus the client stub, and similarly the server plus the server stub) has to do; they don't have to do anything different. All of the magic happens down below in the subcontract mechanism. So, to recap the innovations in the Spring system: it uses object technology as a structuring mechanism in building a network operating system, and through the object technology it provides strong interfaces; it is open, flexible, and extensible, because it is not a monolithic kernel. It has a microkernel, and all the services are provided through these object mechanisms living on top of the kernel. The other nice property is that the clients and the servers don't have to know whether they are colocated on the same node or exist on different nodes of the local area network. Object invocations across the network are handled through the network proxies, and the subcontract mechanism allows the clients and the servers to dynamically change the relationship in terms of who they are talking to. You can get new instances of servers instantiated and advertise them through the subcontract mechanism, so that the clients can dynamically bind to new server instances without changing anything in the client-side application or the client-side stub. Those are all the powers that exist when you decide to innovate under the covers, which is exactly what Sun did with the Spring system.
The journey in this lesson should have given you a good idea of how it is possible to innovate under the covers. Externally, Sun was still peddling UNIX boxes, but internally they had completely revolutionized the structure of the network operating system through the use of object technology. In fact, the subcontract mechanism that Sun invented as part of the Spring system forms the basis for something that many of you who are Java programmers use a lot, namely Java RMI. In the next lesson, we are going to study Java RMI and also Enterprise JavaBeans.