

Sunday, June 30, 2013

Explain the single and two-level directory structures

About Directory Structure
- In computing, the directory structure refers to the way an operating system organizes and presents files and the file system to the user. 
- Files are typically displayed in a hierarchical tree structure. 
- A file name is a special kind of string used to uniquely identify a file stored in the computer's file system. 
- Before 32-bit operating systems came onto the scene, short file names of about 6 to 14 characters were used. 
- Modern operating systems, however, permit longer file names, up to around 250 characters per path name element. 
- In operating systems such as OS/2, Windows, and DOS, each drive has a root directory of the form drive:\, for example "C:\". 
- The backslash "\" is the directory separator, though the forward slash "/" is also recognized internally by the operating system.
- A drive letter is used to name each drive, whether physical or virtual. 
- This also implies that there is no single formal root directory. 
- Instead, each drive has its own root directory, independent of the others. 
- However, several drives can be combined into one virtual drive letter, for example by configuring the hard drives in RAID 0. 
- UNIX and UNIX-like operating systems use the Filesystem Hierarchy Standard. 
- This is the most common directory structure on UNIX operating systems. 
- All files and directories are stored under the root directory "/", even if they are actually present on a number of different physical devices.

About Single-level Directory
- This is the simplest of the directory structures. 
- All files are stored in the same directory, which makes the structure easy to understand and support. 
- The world's first supercomputer, the CDC 6600, also operated with just one directory, even though it was used by a number of users at the same time. 
- The single-level directory has significant limitations. 
- These limitations come into play when more than one user is using the system, or when the system has to deal with a large number of files. 
- All files have to be assigned unique names, since they are all stored under the same directory. 
- No two files can have the same file name. 
- It becomes difficult to remember the names of the files when there are many of them.


About Two-level Directory
- The limitations of the single-level directory structure can be overcome by creating an individual directory for every user. 
- This is the standard solution to the problems of the single-level directory. 
- In the two-level directory structure, a UFD (user file directory) is created for every user. 
- All the user file directories have a similar structure, but each one stores only the files of its individual user.
- When a user logs in or starts a job, the system searches the MFD (master file directory). 
- The MFD is indexed by user name or account number. 
- Each of its entries points to the UFD belonging to that user. 
- When a reference is made to some file, for example when a file has to be created or deleted, the system searches only that user's file directory. A minimal sketch of this lookup appears below.
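Below is a minimal sketch, in Python, of the two-level lookup described above: a master file directory maps each user to a user file directory, and file names need to be unique only within a single UFD. All class and method names here are illustrative assumptions, not taken from any real operating system.

class TwoLevelDirectory:
    def __init__(self):
        self.mfd = {}  # master file directory: user -> UFD

    def add_user(self, user):
        self.mfd[user] = {}  # each user starts with an empty UFD

    def create(self, user, name, data=b""):
        ufd = self.mfd[user]  # only this user's UFD is consulted
        if name in ufd:
            raise FileExistsError(f"{user} already has a file named {name}")
        ufd[name] = data  # names must be unique per user, not globally

    def lookup(self, user, name):
        return self.mfd[user][name]  # KeyError models "file not found"

d = TwoLevelDirectory()
d.add_user("alice")
d.add_user("bob")
d.create("alice", "report.txt")
d.create("bob", "report.txt")  # same name is fine: different UFDs

In a single-level directory, by contrast, there would be just one dictionary shared by every user, and the second create of "report.txt" above would fail with a name clash.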


Dot / Patch release: Estimating the features where changes are required

In a previous post (Estimation of dot / patch release), I wrote about some of the reasons for doing a dot or patch release, and the broad-level points about how to estimate the overall effort needed and generate the schedule for the release. This needs to be done even if you already have a hard date for the patch / dot release, since if the generated schedule goes far beyond the required date, then something is wrong and needs to be solved (possibly by adding more resources, although that cannot be done in every case).
This post talks about actually estimating the changes required, by working through the different features to see the overall impact of the change. Consider a case where a dot release has to be made for the application, with a change made in one of the features for a major defect. The dot release has to go out in a couple of months, with the exact release date decided based on the estimates for the different areas of the release (development effort, testing, localization, etc.). One of the starting points for this estimation effort is figuring out the effort needed for development, and for that, there needs to be a more detailed investigation of which areas need development effort.
How do you do this? The first and most important step is to ensure that you have an experienced developer do the investigation and estimate. The actual work can be done by somebody else in the development team, but the investigation should be done by a senior, experienced developer. How do you start:
- First, make sure that the developer has been given details of the change needed, in terms of requirements.
- Give the developer some time to discuss the impacted areas with other team members.
If the change is in a core area, the developer will need to go across the application to determine the impact. For example, the change may be in the user security framework, which would span the whole application. If the change is localized to a specific area, it is easier to estimate the effort. There is no great complexity in this step, as long as the developer does a thorough job.
- The senior developer should spend time with the actual developer who is going to work on the release, make sure that he has enough understanding of the changes required, and, depending on the skill of the developer assigned, make sure that the estimate is correct (which may include a required buffer).
- The developer also needs to spend time with people from the localization development team and the installer / release teams, so that he has an idea of the amount of time needed from their side and can include these in the effort estimate.
- The developer needs to spend a fair amount of time with the testing team, so that they have a good understanding of all the changes that are going to happen in the release, as well as the actual impact of the change, including all the areas of the application that are affected. 
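Putting these pieces together, here is a rough sketch in Python of rolling up the per-area development estimates with a buffer. The area names, day counts, and buffer factor are all illustrative assumptions, not figures from any real release.

# per-area development estimates, in days (assumed numbers)
estimates_days = {
    "core change (e.g., security framework)": 8,
    "dependent UI areas": 3,
    "localization hand-off": 2,
    "installer / release engineering": 2,
}

BUFFER = 1.25  # assumed padding for a less experienced developer

raw_total = sum(estimates_days.values())
print(f"Raw estimate: {raw_total} days")
print(f"With buffer:  {raw_total * BUFFER:.1f} days")

The point of keeping the estimate itemized like this is that when the senior developer reviews it, each area can be challenged or refined independently.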


Saturday, June 29, 2013

What are the reasons for using layered protocols?

Layered protocols are typical in the field of networking technology. There are two main reasons for using them:
  1. Specialization and
  2. Abstraction
- A protocol creates a neutral standard that rival companies can use to create compatible programs. 
- Many protocols are required in this field; they have to be organized properly and directed to the specialists who can work on them. 
- Using layered protocols, a software house can create a network program if the guidelines of one layer are known. 
- Other companies can provide the services of the lower-level protocols. 
This helps them to specialize. 
- In abstraction, it is assumed that another protocol will provide the lower-level services. 
- The layered protocol architecture provides a conceptual framework that divides the complex task of information exchange between hosts into much simpler tasks. 
- The responsibility of each protocol is narrowly defined. 
- A protocol provides an interface to the next higher-layer protocol. 
- As a result, it hides the details of the underlying layers from the layers above.
- The advantage of using layered protocols is that the same application, i.e., the user-level program, can be used over a number of diverse communication networks.
- For example, you can use the same browser whether you are connected via a dial-up line or via a LAN. 
- Protocol layering is one of the most common techniques for simplifying networking designs. 
- The networking design is divided into various functional layers, and protocols are assigned to carry out the tasks of each layer. 
- It is quite common to keep the functions of data delivery and connection management in separate layers.  
Therefore, we have one protocol for performing the data delivery tasks and a second one for performing connection management. 
- The second one is layered upon the first one. 
- Since the connection management protocol is not concerned with data delivery, it can be quite simple. 
- The OSI seven-layer model and the DoD model are two of the most important layered protocol designs ever produced. 
- The modern Internet represents a fusion of both models. 
- Protocol layering produces simple protocols with well-defined tasks. 
- These protocols can then be put together and used as a new whole. 
- Individual protocols can be replaced or removed as required for particular applications. 
- Networking is a field involving programmers, electricians, mathematicians, designers, and so on. 
- People from these various fields have very little in common, and it is layering that lets people with such varying skills each assume that the others are carrying out their duties. 
- This is what we call abstraction. 
- Via abstraction, an application programmer can follow the protocols at one level simply by assuming that the network exists, and electricians can likewise make their assumptions and do their work. 
- One layer can provide services to the succeeding layer and can get services in return too. 
- Abstraction is thus the fundamental foundation for layering. 
- A stack has been used for representing networking protocols since the start of network engineering. 
- Without the stack, networking would be unmanageable as well as overwhelming. 
The stack represents the layers of specialization, starting with the first protocols derived from TCP/IP.
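The layering idea is easy to demonstrate in code. Below is a minimal sketch in Python where each layer only knows how to add and strip its own header; the layer names and header format are illustrative assumptions, not any real protocol's wire format.

class Layer:
    def __init__(self, name):
        self.name = name

    def encapsulate(self, payload: bytes) -> bytes:
        header = f"[{self.name}]".encode()
        return header + payload  # add this layer's header in front

    def decapsulate(self, frame: bytes) -> bytes:
        header = f"[{self.name}]".encode()
        assert frame.startswith(header), f"not a {self.name} frame"
        return frame[len(header):]  # strip this layer's header

# a toy stack: data is wrapped on the way down, unwrapped on the way up
stack = [Layer("transport"), Layer("network"), Layer("link")]

frame = b"hello"
for layer in stack:  # sender side, top of the stack downwards
    frame = layer.encapsulate(frame)

for layer in reversed(stack):  # receiver side, bottom upwards
    frame = layer.decapsulate(frame)

assert frame == b"hello"

Each layer neither knows nor cares what the other layers' headers look like, which is exactly the abstraction the bullets above describe.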



Being able to estimate effort needed for smaller releases (patch / dot release)

One of the most difficult tasks to estimate is the smaller project that a team needs to do. Consider a long project, the typical new-version release of a software product (for a typically long example, consider the schedule for the release of a new version of Microsoft Office - it is typically more than 1 year); now, even when the team is busy doing this release, there will always be the need to do a dot release or a patch at the same time (or even multiple such releases). Why would this happen?
- There could be a security issue for which an urgent patch needs to be released
- There could be some new feature that is meant to be pushed through before the next release
- Because the release schedule is long, defects found during the following months are collected and fixed in an interim release
- And this is one of the most interesting reasons for a release - I know several people who will not take a new version of a Microsoft product until a service pack has been released, since that is when it will have stabilized - so this puts pressure on the company to do such a release within a few months

With all this, who does these interim releases? For products where there is already a definite schedule of interim releases, a small sub-team can be set up to take care of them. Such a team will soon gain expertise in doing these smaller releases, and can work on the main release when there is no interim release ongoing. However, for teams where there is no definite schedule for these interim releases, it does not make sense to put dedicated folks on this work. Even more, in my experience, when people are assigned to such interim releases, there is a need to rotate them, since these releases are seen as essential but still maintenance, without the excitement of working on something new.
Now, once people are assigned to the project, there is a need to figure out the schedule for such a release. In most of these cases, the end date is already fixed - the Product Manager already has a date in mind for when these releases need to ship. But the team still needs to define the estimate and have a probable schedule in mind:
- Whether this will be a dot release or a patch (a patch is a small set of files that is downloaded and installed; a dot release is typically the full installer with a few changed files). The advantage of a dot release is that it can replace the original installer
- Define the change that is happening (this typically means the files and features that are being changed, and the estimated impact of this change)
- The amount of time needed for the development team to make the required changes
- The amount of time needed for the testing team to test the area and surrounding areas (this would also include the installer created for the release)
- Typically most products have language versions, and those need to be created and tested when a dot release or a patch is made. So, the estimate for the amount of time needed for these activities also needs to be incorporated into the schedule.

Together, all of these can be woven into a schedule for the interim release. If it turns out that the given deadline is earlier than the projected schedule end, then you need to push back. For a small release, it is next to impossible to deliver with the required quality unless sufficient time is given. We once had a situation where a particular variation in the installer made a small error in the registry, which then prevented those who installed it from being able to install another update; there was a lot of backlash over that, and we had to release another patch to fix it.
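As a rough sketch of this roll-up, here is how the component estimates from the list above could be turned into a projected end date and checked against the fixed deadline. All the dates and durations are assumed numbers for illustration, and the activities are treated as purely sequential, which real schedules rarely are.

from datetime import date, timedelta

activities_days = {  # illustrative durations, in days
    "define the change and its impact": 3,
    "development": 10,
    "testing (including the installer)": 8,
    "language versions": 5,
}

start = date(2013, 7, 1)  # assumed start date
projected_end = start + timedelta(days=sum(activities_days.values()))
deadline = date(2013, 7, 25)  # assumed date fixed by the Product Manager

print("Projected end:", projected_end)
if projected_end > deadline:
    overshoot = (projected_end - deadline).days
    print(f"Push back: the schedule exceeds the deadline by {overshoot} days")

If the projected end lands beyond the deadline, that is the data you take into the push-back conversation.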

Read more about the estimation done by a senior member of the development team.


Friday, June 28, 2013

What are the advantages of frame relay over a leased phone line?

Frame relay and leased phone lines are two of the physical connection options for setting up network links. 

Advantages of Frame Relay over Leased Phone Line
- Frame relay is a standardized WAN (wide area network) technology that specifies the physical and logical link layers of digital telecommunication channels. 
- It does this by means of a packet-switching methodology.
- Frame relay technology was originally designed for transport across ISDN (integrated services digital network) infrastructure. 
- Today, it is used in the context of a number of other network interfaces. 
- Frame relay is commonly implemented for VoFR (voice over frame relay). 
- It is used as an encapsulation technique for data. 
- Frame relay is used between LANs and over WANs.
- The user is provided a private line or a leased line that connects to a frame relay node. 
- The frequently changing path through the network is transparent to the WAN protocols used by the end users. 
- Data is transmitted over these networks, and the frame relay network handles all of this.
- One advantage of frame relay over leased lines is that it is less expensive, and this is what makes frame relay so popular in the telecommunications industry.
- Another advantage that makes frame relay popular is that the user equipment in a frame relay network can be configured with extreme simplicity. 
- With the rise of Ethernet over fiber optics and of dedicated broadband services like DSL and cable modem, VPN, and MPLS, the use of the frame relay protocol and encapsulation has declined. 
- However, there are a number of rural regions in India where cable modem and DSL services are still absent.
- In such areas, the only option for a non-dial-up connection may be a 64 kbit/s frame relay line.
- Thus, a retail chain might use it to connect its branches to the corporate WAN. 
- The aim of frame relay's designers was to offer a cost-efficient telecommunication service for transmitting intermittent data traffic between end points in local area networks and across WANs. 
- The frame relay process puts data into variable-sized units called frames. 
- Any required error correction is left to the end points. 
- This error correction includes re-transmission of the data. 
- Leaving it to the end points increases the overall speed of data transmission. 
- The network provides a PVC (permanent virtual circuit), so that the customer sees a dedicated connection without having to pay for a leased line that is engaged full time. 
- The service provider figures out the route by which each frame travels to its destination end point, and decides the charges based upon usage. 
- The enterprise can select a level of service quality. 
- Some frames can be prioritized while the importance of other frames is reduced. 
- Frame relay can run on systems such as the following:
Ø  Fractional T-1
Ø  Full T-carrier
Ø  E-1
Ø  Full E-carrier
- Frame relay provides a mid-range service between ISDN, which operates at a speed of 128 kbit/s, and ATM (asynchronous transfer mode), which operates at much higher speeds. 
- It not only sits between these services, it also complements them. 
- The basis of frame relay technology is X.25 packet switching, which was designed for data transmission over analog voice lines.



How to decide whether to add support for another language to the software product

The time is long gone when you could release your software in English alone. Over the past few decades, software products have seen an increasing amount of revenue coming from releases made in different languages, sold in different regions of the world. So, although sales in the United States may still be your highest source of revenue, an increasing amount comes from sales in Japan, in Europe, in Latin and South America, and in the emerging regions of China and India (for the last two, there is another challenge to meet, namely how to prevent piracy and increase legal sales of the product in those regions).
So, if you have an English-only version of your product, you will sell it in the United States, Great Britain, Australia (and the other English-speaking nations of the world), and to some extent you will even sell the English-language version in other countries, to those consumers who want it. But what about the Spanish-speaking world, the Japanese-speaking, the French-speaking, and so on? In many of these regions, there will be a reluctance to buy the product just because the company has not chosen to bring out a release in that language. So it almost always makes sense to release specific language versions of the product, with the same set of features as the English-language version.
The process of creating multiple language versions of the software sounds easy enough. What is basically required is to translate all the UI elements in the application (strings, error messages, text within images, and any other text that the user can see). The way the software code is written makes it easy to extract all these UI elements and send them off for translation. Once translated, they are incorporated back into the software, which is then tested to ensure that there is no functional issue, and no case where the translation leads to text that is messy or otherwise not right.
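The "way the software code is written" above usually means that UI strings are externalized rather than hard-coded. Here is a minimal sketch of the idea in Python; real products typically use resource files or a framework like gettext, and the keys, locales, and fallback behavior below are illustrative assumptions.

# code refers to string keys; translators only touch the per-language tables
STRINGS = {
    "en": {"greeting": "Welcome", "save_error": "Could not save the file"},
    "ja": {"greeting": "ようこそ", "save_error": "ファイルを保存できませんでした"},
}

def tr(key: str, locale: str = "en") -> str:
    # fall back to English if the locale or key is not yet translated
    return STRINGS.get(locale, {}).get(key, STRINGS["en"][key])

print(tr("greeting", "ja"))   # prints the Japanese greeting
print(tr("save_error", "fr")) # no French table yet: falls back to English

Because the code never contains user-visible text directly, extracting everything for translation is a mechanical step rather than a code change.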
If it is that easy, why not just translate the software into as many languages as required? Well, the previous paragraph was an over-statement; the process is expensive in terms of both money and resources, and it is not so exact. It can happen (and this especially happens with Japanese, German, and Russian) that the translated text is much larger than the English version, and effort needs to be spent either to increase the size of the space where the text has to fit (and this would need to be done in all languages) or to re-translate the text into something smaller. For large products, a complete translation, which also includes thorough testing of the product, can run up to almost a million dollars.
So this is why translation into a new language needs thought. Reasons for adding a language include:
- Marketing estimate of enhanced revenue
- Potential sales in that region over a period of time
- A negative image developing because the product is not available in that language
- The needs of a partner. In many cases, when doing deals with partners, the partner will want the product to be available in multiple languages, and if a language is not supported, the deal could be in danger


Thursday, June 27, 2013

What is the difference between a passive star and an active repeater in a fiber optic network?

Two important components of a fiber optic network are the passive star coupler and the active repeater. 

Passive Star in Fiber Optic Network
- Passive star couplers are single-mode fiber optic couplers with reflective properties.  
- These couplers are used for optical local area networking at very high speeds. 
- They are made from very simple components such as mirrors and 3 dB couplers. 
- Besides this, these star couplers save a lot of optical fiber when compared to their transmissive counterparts. 
- They are free of multi-path effects, which avoids interference. 
- A fiber optic network may consist of any number of passive star couplers, and each of them is capable of connecting a number of users. 
- The input and output of every passive star coupler are connected to the output and input of an active star coupler. 
- The round-trip transmission time is stored by the active star coupler. 
- When it receives a signal from a passive star coupler, it stops the output to that coupler for the duration of the signal.
- It also inhibits the incoming data from all the other passive star couplers for the round-trip transmission delay plus the signal duration. 
- The purpose of a star coupler is to take one input signal and split it into a number of output signals (a short worked example of the splitting loss follows after this list). 
- In the telecommunications industry and in fiber optic communication, this passive optical device is used in network applications. 
- If an input signal is introduced at one of the input ports, it is distributed to all of the output ports of the coupler. 
- Owing to the way a passive star coupler is constructed, the number of ports it has is a power of 2. 
- For example, a two-port coupler (a directional coupler or splitter) has 2 input ports and 2 output ports.
- A four-port coupler has 4 input ports and 4 output ports, and so on. 
- Digital Equipment Corporation also sold a device by the name of star coupler, which was used for interconnecting links and computers through coaxial cable instead of optical fiber. 
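As a worked example: an ideal 1-to-N passive splitter divides the input power equally, so each output port sees a splitting loss of 10*log10(N) dB. The sketch below ignores the excess loss of real components, which is an assumption for illustration.

import math

def splitting_loss_db(n_ports: int) -> float:
    # ideal power split: each of n outputs gets 1/n of the input power
    return 10 * math.log10(n_ports)

for n in (2, 4, 8, 16):
    print(f"{n:2d}-way split: {splitting_loss_db(n):4.1f} dB per output")

A 2-way split comes to about 3 dB, which is why the simple building blocks mentioned above are called 3 dB couplers.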

Active Repeater in Fiber Optic Network 
- An active repeater is an important telecommunications device that re-transmits the signal it receives at a higher level or higher power, or onto the other side of an obstruction, so that long distances can be covered. 
- Historically, a repeater was an electromechanical device that regenerated telegraph signals. 
- It may be defined as an analog device that amplifies the input signal, or a digital device that reshapes and re-times it for re-transmission. 
- A repeater that can perform the re-timing operation is called a regenerator. 
- Repeaters just amplify the physical signal; they do not interpret the data carried by the signal. 
- Repeaters operate at the 1st layer, i.e., the physical layer. 
- Repeaters are employed for boosting signals in optical fiber lines as well as in twisted pair and coaxial cables. 
- When a signal travels through a channel, it gets attenuated with distance because of energy loss (dielectric losses, conductor resistance, etc.). 
- When light travels in an optical fiber, it is scattered and absorbed, and hence attenuated. 
- Therefore, in long fiber lines, repeaters are installed at proper intervals for regenerating and strengthening the signal. 
- A repeater in optical communication performs the following functions:
Ø  Takes the input optical signal
Ø  Converts it into an electrical signal
Ø  Regenerates it
Ø  Converts it back into an optical signal
Ø  Re-transmits it

- These repeaters are usually employed in submarine as well as transcontinental communication cables, where the accumulated loss would otherwise be unacceptable.  


Wednesday, June 26, 2013

Encouraging a creative team member to assist with UI designer duties

There are a certain number of resources that any particular software team has assigned to it. Based on the amount of work under consideration, the needs of the team are estimated in terms of developers and testers versus the amount of work to be done. If there is a lot of work and not enough developers or testers for this quantity of work, then the team has only 3 choices:
- Ask for and get more testers and/or developers, and take on this quantity of work
- Ask for but not get more testers and/or developers, and decline work beyond the amount that can be done with the team that you have
- The third one is the most problematic. The team does not get additional testers or developers, but a lot of pressure builds to take on the additional work. One would not like to think of such a scenario, but it does happen, and eventually the team either gives up the additional work or takes on a lot of stress, and maybe even delivers reduced quality.
However, twice in the past 4 years, we have come across a situation that was not easily solvable, and for which we did not get any additional support. What was this case? This was the case where the release we were doing had a number of features that required the support of a workflow / UI designer. In a typical release, we have a certain number of such resources assigned to the team, based on an expectation that the amount of workflow and UI work required will be a certain percentage (let us assume that 60% of the work being done by the team needs the support of the workflow / UI team - the remaining 40% is where the team does some kind of tweaking / modification that does not require any workflow or UI changes).
However, this got badly affected in a release where the estimate of the amount of work needing the workflow / UI designer was around 80%, and it was pretty clear that the team doing the workflow / UI design was not staffed for this extra work. Even if we had got an allocation of budget for the extra work (which was not 100% certain by itself), it takes months to hire somebody with this skill set. Hence, there was no getting around the fact that we had a puzzle on our hands - we had estimated work for which we had enough developers and testers, but we did not have enough designers. What to do?
When we were discussing this with the senior members of the team, we came across an interesting suggestion. In the past, the team had noted that there were some members who were more easily able to comprehend the designs put out by the designer team and understood the way they did their reasoning. Given that we really did not have a choice, we went ahead with an open offer to team members who wanted to give free flow to their creative juices: prepare the design, with rounds of review by the designer team (we found that this amount of review effort could be accommodated), and then present it to that team. One of the main persons we expected to step up did not volunteer, but another person who was also seen as a prospect did, and we pulled her off her current responsibilities and got her temporarily assigned to the designer team. Over the next few weeks, we kept a close watch on this arrangement, and while the result was not as good as a design done by the designer team, our Product Manager was satisfied with it, and so was the design team, and we went with the design that she produced and the team accepted. Now, this was not a long-term arrangement, but in the scenario described above, it seemed to work. 


What are the advantages and disadvantages of having international standards for network protocols?

Without rules governing the exchange of data and the establishment of connections between networks, there would be chaos in the industry. These rules are called line protocols, and they are crucial for inter-networking and communication across networks. 
The mode and the speed of communication are controlled by what are called communications software packages. These network protocols are defined by international standards. However, having international standards for the protocols has both advantages and disadvantages. 

- A number of standard network protocols exist for carrying out functions such as routing, packetizing, addressing, and so on. 
- All these protocols lay out a standard definition of how routing and addressing have to be done. 
- They also define specifications for the structure of the packets to be transferred between different hosts. 
- Some commonly used protocol standards are:
Ø  X.25
Ø  IPX/SPX
Ø  TCP/IP
Ø  OSI

- OSI stands for Open Systems Interconnection. 
- Early networking systems had a big problem: a lack of consistency between the protocols employed by different types of computers. 
- As a consequence of this problem, international standards came into the picture. 
- Thus, international standards were established for the various data transmission protocols. 
- For example, OSI is a set of standard protocols developed by the ISO (International Organization for Standardization).
- In this model, the functionality of network protocols has been divided into seven layers of communication rules or protocols. 
- The purpose of this model is to identify the functions being offered by the system. 
- The following three layers appear both in the host systems and in other units such as processors and control units:
  1. Physical layer
  2. Data link layer
  3. Network layer
- The remaining layers are found in the host systems only. 

Advantages of having International Standards
Ø  If all the systems follow the same standard, it becomes easy for everyone to connect to everyone else. In other words, international standards provide easy interconnectivity.
Ø  If a standard is widely used, it gains economies of scale, as with VLSI chips.
Ø  With all the systems using the same standard, the installation and the maintenance of the connections become quite easy.
Ø  Software designed by developers from all over the world won't have any problem interfacing with the host system and with other software. It will work well with a wide range of operating systems and hardware, since both are using the same standard.


Disadvantages of having International Standards
Ø  Hasty standardization may result in poor standards.
Ø  Once a standard is adopted internationally, it is difficult to change, and it becomes difficult to introduce new and better techniques into it.
Ø  If a problem occurs, it has to be dealt with as an international problem.
Ø  The manufacturers and companies are bound to follow the same international standards, and so they cannot develop something better of their own.
Ø  Large multinational companies won't be able to lock everyone into their proprietary protocols, and therefore cannot make huge profits from them.


The TCP/IP protocol was developed to make communication easy between dissimilar systems. A number of hardware and software vendors support this protocol, on machines ranging from mainframes to microcomputers, and a number of corporations, universities, and government agencies make use of it. 


Tuesday, June 25, 2013

Working with people who are not as responsive as developers or testers ..

It can really test the patience of a team when they work with people who do not seem to follow the same guidelines, processes, and schedules as the rest of the team. It seems a bit odd, though. The schedule is the most important item of a software development project, with a lot of effort going towards determining whether the team is following the schedule, and any slippage can imperil the success of the project. If there are slippages in the schedule, it takes a lot of effort from the project manager and the other managers and senior members of the team to get the project back on track. In such a situation, it seems strange that there can be members of the extended team who are not so committed to the schedule (actually, it is not right to say that they are not committed to the project; it is just that it can be a challenge to get them to follow the details of the schedule).
Who are these members of the extended team? Well, let me lay out a few candidates (this does not mean that everyone in these roles ignores the details of the schedule, but I do know many people in such roles with whom it was a challenge) - these are typically the more creative people of the team: in some cases, the Product Manager; more often, the UI Designer or the Experience Designer; and in other cases, some of the more senior members of the development team.
Does this really happen? Well, yes, it happens all the time. Let me lay out an example - during the course of a project schedule, there is an initial period when the requirements need to be defined by the Product Manager (with a certain detail inherent in these requirements, a level that is enough for the workflow designer and the development team to do a certain level of estimation), and more often than not, unless I, as the Program Manager, sent a lot of reminders to the Product Manager, there would be some amount of delay, or the requirements that were available would not be detailed enough. As a result, within a couple of cycles we had actually started adding a buffer, so that after all the urging, there would still be time for a couple of iterations on the requirements. Given that the Product Manager is also typically a senior person, we did not try any other method of ensuring that they finished their work by the scheduled time; instead, we added a buffer of around a week to the overall schedule.
The bigger problem was when we were dealing with the experience / UI designer. The interaction with this person was on a regular basis, for more than 70% of the features (given that most of the features needed some kind of UI work or some kind of workflow optimization). Hence, it was not only the overall schedule but also the schedule for each feature that needed dates and details for the work to be done by the UI designer. The work done by the designer followed in logical order from the requirements and was needed before the development team could do their work, so any delay in this work would cause a ripple effect all down the schedule. However, in a clear case of Murphy's Law, the chance of a delay from the designer's end was high (not always, but in most cases there would be something pending).
How do you ensure that the work done by the designer is on time? Well, there are no sure ways (at least nothing that was 100% successful for us), but here are some steps:
- Lay out a clear schedule for when the deliveries from the designer are expected, including dates for interim deliveries, review times, and final deliveries.
- If the designer does not agree with the dates that you started out with, and considers them too aggressive, then you need to have a discussion. Don't try to force dates on the designer from your end; make sure the dates are negotiated.
- Set up a weekly telecon with the designer, make sure that you are reminding them of the dates that are due, and find out from them whether they are on track. If they are wildly off track, then you might need to modify your dates to some degree, and make sure that the team knows about it.
- If there is a manager on the designer's end who is overseeing this, then make sure that they are also in the loop on the work being done.
- Finally, cross your fingers and hope that this is not the schedule where the designer is going to delay their deliveries.


Explain demand paging and page replacement

Demand paging and page replacement are two very important memory management strategies in computer operating systems. 

About Demand Paging
- Demand paging is the opposite of anticipatory paging. 
- Demand paging is a memory management strategy developed for managing virtual memory.
- In an operating system that uses the demand paging technique, a copy of a disk page is made and kept in physical memory only when a request is made for it, i.e., whenever a page fault occurs. 
- It follows that the execution of a process starts with none of its pages loaded into main memory, and a number of page faults then occur one after the other until all of its required pages have been loaded. 
- Demand paging comes under the category of lazy loading techniques. 
- The strategy is that a page should be brought into main memory only if the process in execution demands it. 
- That is why the strategy is named demand paging; it is sometimes also called lazy evaluation. 
- A page table implementation is required for using the demand paging technique.
- The purpose of this table is to map logical memory to physical memory. 
- This table uses a valid/invalid bit to mark whether a page is in memory. 

The following steps are carried out whenever a process demands a page:
  1. An attempt is made to access the page.
  2. If the page is present in memory, the usual instructions are followed.
  3. If the page is not there, i.e., its entry is marked invalid, a page fault trap is generated.
  4. The memory reference to the location in virtual memory is checked for validity. If it is an illegal memory access, the process is terminated. If not, the requested page has to be paged in.
  5. Disk operations are scheduled for reading the requested page into physical memory.
  6. The instruction that raised the page fault trap is restarted.
- The lazy nature of this strategy is itself a great advantage. 
- Since more space remains available in physical memory, more processes can be executed, leading to a decrease in context-switching time.
- At program start-up, there is less latency during loading. 
- This is because much less data flows between main memory and secondary memory.
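Here is a minimal sketch of the fault-and-load cycle in Python, with a toy page table of valid bits; the "disk" and "ram" dictionaries are illustrative stand-ins for the backing store and physical memory, not how any real kernel is written.

disk = {n: f"contents of page {n}" for n in range(8)}  # backing store
ram = {}                                               # physical memory
valid = [False] * 8                                    # page table valid bits

def access(page: int) -> str:
    if not valid[page]:            # invalid entry: page fault
        print(f"page fault on page {page}, paging in from disk")
        ram[page] = disk[page]     # schedule the disk read
        valid[page] = True         # mark the page table entry valid
    return ram[page]               # restart the access, now in memory

access(3)  # first touch: faults and loads the page
access(3)  # second touch: already valid, no fault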


About Page Replacement
- When too few real memory frames are available, a page stealer is invoked. 
- This stealer searches through the PFT (page frame table) for pages to steal. 
- This table records, for each page, whether it has been referenced and whether it has been modified. 
- If the page stealer finds a page that has been referenced, it does not steal it, but instead resets the reference flag for that page. 
- If, on the next pass, the page stealer comes across this page and it is still un-referenced, it steals that page. 
- Note that in this pass the page was flagged as un-referenced. 
- Any change made to the page is indicated by means of the modify flag.
- If the modify flag of the page to be stolen is set, then a page-out call has to be made before the page stealer does its work. 
- Thus, pages that form part of currently executing segments are written to the so-called paging space, while persistent segments are written to disk. 
- Page replacement is carried out by algorithms called page replacement algorithms. 
- Besides choosing the pages to steal, these algorithms also keep track of the page faults. 
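The page stealer described above behaves much like the classic second-chance (clock) algorithm. Below is a toy sketch of that idea in Python; the frame contents and reference bits are illustrative assumptions, not any particular operating system's page frame table.

from collections import deque

frames = deque([
    # (page number, referenced bit)
    (1, True), (2, False), (3, True), (4, False),
])

def steal() -> int:
    """Return the page number of the frame chosen as the victim."""
    while True:
        page, referenced = frames.popleft()
        if referenced:
            frames.append((page, False))  # second chance: clear the bit
        else:
            return page                   # still un-referenced: steal it

print("stole page", steal())  # page 2 with the layout above

A page whose modify flag is set would additionally need to be paged out to the paging space or to disk, as described above, before its frame is reused.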

