Common name for several types of sales where the price is neither set nor arrived at by negotiation, but is discovered through the process of competitive and open bidding. The two major types of auction are:
(1) Forward auction in which several buyers bid for one seller's good(s)
(2) Reverse auction in which several sellers bid for one buyer's order.
An auction is complete (and a binding contract is created) when a bid is accepted by the seller or the buyer (as the case may be). The internet age has transformed auction into a truly open process in which thousands of goods (from books to ships) and services (from air travel to legal advice) may be offered for bidding by anyone from anywhere and at any time on websites such as eBay.com. Internet auctions are an important aspect of electronic commerce.
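To make the two types concrete, here is a small Python sketch (purely hypothetical, not code from eBay or any real auction site) of how the winning bid differs: a forward auction awards the item to the highest buyer bid, while a reverse auction awards the order to the lowest seller offer.

```python
# Hypothetical sketch of the two basic auction types.
# In a forward auction, buyers bid UP and the highest bid wins;
# in a reverse auction, sellers bid DOWN and the lowest offer wins.

def forward_auction(bids):
    """bids: {buyer_name: amount}. Returns (winner, price)."""
    winner = max(bids, key=bids.get)
    return winner, bids[winner]

def reverse_auction(offers):
    """offers: {seller_name: amount}. Returns (winner, price)."""
    winner = min(offers, key=offers.get)
    return winner, offers[winner]

if __name__ == "__main__":
    print(forward_auction({"Ana": 120, "Ben": 150, "Cal": 135}))  # ('Ben', 150)
    print(reverse_auction({"SupplierX": 90, "SupplierY": 75}))    # ('SupplierY', 75)
```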
The advantages of selling by auction are:
Reduction in Marketing Time: From the time you make the decision to employ an auction company's services until auction day is usually 30 to 45 days, depending upon the property. Even in good markets, this compares favorably with traditional sales methods. In both strong and weak markets, auctions allow you to get your property sold quickly and efficiently.
Reduction in Costs of Ownership: Once the decision is made to sell, the quicker the process can be accomplished the better. Typical costs associated with the ownership of real property, such as maintenance, insurance, taxes, and interest, can be drastically reduced. Many times, over the course of traditional listings, market conditions change, affecting the price a willing and able purchaser will pay. Even in good markets, interest rates fluctuate, creating a risk in the marketplace over which you have no control.
Targeting Qualified Buyers: Most of the time, an auction finds success because the company you have chosen understands the best means to reach a targeted audience. Marketing your property to those incapable of making the decision to purchase, or lacking the ability to purchase, is a genuine waste of time and money. At the same time, your auction cannot be kept a secret; advertising is essential to produce results. Each auction marketing campaign is carefully planned and thought out to ensure all possible prospects become aware of your property and the opportunity afforded by the scheduled auction.
Reduction in Marketing Time & Absorption: We have successfully offered and sold many properties for lenders, developers, corporations, and others with a need or desire to liquidate communities. This can include developed subdivision lots, completed residential units, acreage, or nearly any other type of real estate. At auctions, it is possible to accomplish in one day what may have taken months or years to sell out using traditional methods. Auction marketing is compacted into a well-defined time frame. This reduces holding costs, interest charges, expensive marketing campaigns, and risks associated with the marketplace.
Negotiations Between Sellers and Buyers Defined: At an auction, all negotiations between sellers and bidders are carried out while the auctioneer is at the podium. Offers to purchase are made under predetermined conditions which bidders have agreed to comply with. This eliminates the need for complicated negotiations, offers, counter offers, and confusion. Through the process of competitive bidding, final prices are established.
Predetermined Terms and Conditions of Sale: At an auction, the terms of sale are determined in advance of the event. In most cases, your property will be offered in "As-Is" condition, requiring interested parties to conclude any inspections they deem necessary before the first bid is made. Upon the conclusion of the auction, the top bidder will be required to execute a simple contract of sale requiring a non-refundable deposit, usually 10% of the purchase price. In most cases, the contract will require closing 30 to 45 days following the auction. While many purchasers obtain financing on properties purchased at auctions, the contract is not contingent upon the ability to obtain financing. Many people purchasing at auctions are pre-qualified by mortgage lenders or have the ability to close without financing. This allows you to leave the auction knowing your property has sold.
Achievement of Fair Market Value: Through the process of competitive bidding, you can be assured your property brings a fair price. This contrasts with traditional listings, as the final price is achieved as qualified bidders react to your property. At an auction, you are able to observe what others are willing to pay for your property. It is one thing to obtain a current appraisal on your property or to hear what brokers think your property is worth, yet in the end these matter less than the price a willing, qualified purchaser will pay on a date certain. This is the price achieved at professionally conducted auctions, and it compares favorably to the written definition of current market value. Sometimes auctions are equated with negative financial events and, certainly, some properties typically offered are troubled. Many times, properties become stale, having been on the market for long periods of time. Sometimes they become shopped out, passed from one broker to another without a sale. Some bidders attend auctions expecting to pay "pennies on the dollar." The reality is that most properties, when offered by professional auction companies, bring prices that are fair to both the seller and the purchaser.
Selling on Your Time: In most traditional listing agreements, there is no time frame stated for the sale of your property. It could take as little as one day or as long as the listing lasts. At auction, you set the date and time the property will be sold. Auction advertising and promotion are designed to attract the attention of qualified bidders, who must act quickly because there is an established date and time for the event. This sends a powerful message - you are motivated and committed to selling.
The reason I chose this title is that one of my friends suggested it to me - thanks to her! She thinks this title is appropriate for me because she knows that I love playing with numbers, hehe. I have wondered about that myself, but she is my friend and I know she understands me better than I understand myself. Thank you, Markonah, love you! This blog will also be dedicated to my IT subject.
Tuesday, November 29, 2011
Saturday, November 26, 2011
B2B (Business-to-Business) ^....^ ~
On the Internet, B2B (business-to-business), also known as e-biz, is the exchange of products, services, or information between businesses rather than between businesses and consumers. Although early interest centered on the growth of retailing on the Internet (sometimes called e-tailing), forecasts are that B2B revenue will far exceed business-to-consumer (B2C) revenue in the near future. According to studies published in early 2000, the money volume of B2B exceeds that of e-tailing by 10 to 1. Over the next five years, B2B is expected to have a compound annual growth of 41%. The Gartner Group estimates B2B revenue worldwide to be $7.29 trillion by 2004. In early 2000, the volume of investment in B2B by venture capitalists was reported to be accelerating sharply, although profitable B2B sites were not yet easy to find.
B2B Web sites can be sorted into:
Company Web sites, since the target audience for many company Web sites is other companies and their employees. Company sites can be thought of as round-the-clock mini-trade exhibits.
A company Web site can also serve as the entrance to an exclusive extranet available only to customers or registered site users. Some company Web sites sell directly from the site, effectively e-tailing to other businesses.
Product supply and procurement exchanges, where a company purchasing agent can shop for supplies from vendors, request proposals, and, in some cases, bid to make a purchase at a desired price. Sometimes referred to as e-procurement sites, some serve a range of industries and others focus on a niche market.
Specialized or vertical industry portals which provide a "subWeb" of information, product listings, discussion groups, and other features. These vertical portal sites have a broader purpose than the procurement sites (although they may also support buying and selling).
Brokering sites that act as an intermediary between someone wanting a product or service and potential providers. Equipment leasing is an example.
Information sites (sometimes known as infomediaries), which provide information about a particular industry for its companies and their employees. These include specialized search sites and trade and industry standards organization sites.
Many B2B sites may seem to fall into more than one of these groups. Models for B2B sites are still evolving.
Another type of B2B enterprise is software for building B2B Web sites, including site-building tools and templates, databases, and methodologies, as well as transaction software.
B2B is e-commerce between businesses. An earlier and much more limited kind of online B2B prior to the Internet was Electronic Data Interchange (EDI), which is still widely used.
BRICK and MORTAR / BRICK and CLICK
What Does Brick and Mortar Mean?
A traditional "street-side" business that deals with its customers face to face in an office or store that the business owns or rents. The local grocery store and the corner bank are examples of "brick and mortar" companies. Brick and mortar businesses can find it difficult to compete with web-based businesses because the latter usually have lower operating costs and greater flexibility.
What Does Brick and Click Mean?
It refers to businesses that offer online services via the Web as well as the traditional retail outlets (offline) staffed by people. Coined in 1999 by David Pottruck, co-CEO of the Charles Schwab brokerage firm, it refers to running the two divisions in a cooperative and integrated manner where they both support and benefit from each other.
Thursday, November 24, 2011
COMPUTER VIRUS.....
A computer virus is a small software program that spreads from one computer to another computer and that interferes with computer operation. A computer virus may corrupt or delete data on a computer, use an e-mail program to spread the virus to other computers, or even delete everything on the hard disk.
Computer viruses are most easily spread by attachments in e-mail messages or by instant messaging messages. Therefore, you must never open an e-mail attachment unless you know who sent the message or unless you are expecting the attachment. Computer viruses can be disguised as attachments of funny images, greeting cards, or audio and video files. Computer viruses also spread through downloads on the Internet. They can be hidden in pirated software or in other files or programs that you may download. A computer virus infection may cause the following problems:
Note: These problems may also occur because of ordinary Windows functions or problems in Windows that are not caused by a computer virus.
Windows does not start even though you have not made any system changes, or even though you have not installed or removed any programs.
Windows does not start because certain important system files are missing. Additionally, you receive an error message that lists the missing files.
The computer sometimes starts as expected. However, at other times, the computer stops responding before the desktop icons and the taskbar appear.
The computer runs very slowly. Additionally, the computer takes longer than expected to start.
You receive out-of-memory error messages even though the computer has sufficient RAM.
New programs are installed incorrectly.
Windows restarts unexpectedly and spontaneously.
Programs that used to run stop responding frequently. Even if you remove and reinstall the programs, the issue continues to occur.
A disk utility such as Scandisk reports multiple serious disk errors.
A partition disappears.
The computer always stops responding when you try to use Microsoft Office products.
You cannot start Windows Task Manager.
Antivirus software indicates that a computer virus is present.
Symptoms of a Computer Virus
If you suspect or confirm that your computer is infected with a computer virus, obtain current antivirus software. The following are some primary indicators that a computer may be infected:
The computer runs slower than usual.
The computer stops responding, or it locks up frequently.
The computer crashes, and then it restarts every few minutes.
The computer restarts on its own. Additionally, the computer does not run as usual.
Applications on the computer do not work correctly.
Disks or disk drives are inaccessible.
You cannot print items correctly.
You see unusual error messages.
You see distorted menus and dialog boxes.
There is a double extension on an attachment that you recently opened, such as a .jpg, .vbs, .gif, or .exe extension.
An antivirus program is disabled for no reason. Additionally, the antivirus program cannot be restarted.
An antivirus program cannot be installed on the computer, or the antivirus program will not run.
New icons appear on the desktop that you did not put there, or the icons are not associated with any recently installed programs.
Strange sounds or music plays from the speakers unexpectedly.
A program disappears from the computer even though you did not intentionally remove the program.
Note: These are common signs of infection. However, these signs may also be caused by hardware or software problems that have nothing to do with a computer virus. Unless you run the Microsoft Malicious Software Removal Tool and then install industry-standard, up-to-date antivirus software on your computer, you cannot be certain whether a computer is infected with a computer virus or not.
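As an illustration of the double-extension symptom in the list above, the following short Python sketch flags attachment names such as "holiday.jpg.vbs" that hide an executable extension behind a harmless-looking one. The extension lists are my own assumptions for the example, not an official Microsoft check.

```python
# Hypothetical check for suspicious double extensions in attachment names,
# e.g. "holiday.jpg.vbs" or "invoice.pdf.exe".

RISKY = {".exe", ".vbs", ".scr", ".bat"}      # assumed executable extensions
HARMLESS = {".jpg", ".gif", ".pdf", ".txt"}   # assumed "decoy" extensions

def looks_suspicious(filename: str) -> bool:
    parts = filename.lower().rsplit(".", 2)
    if len(parts) < 3:
        return False  # no double extension at all
    decoy, real = "." + parts[1], "." + parts[2]
    return decoy in HARMLESS and real in RISKY

print(looks_suspicious("holiday.jpg.vbs"))  # True
print(looks_suspicious("report.pdf"))       # False
```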
Symptoms of Worms and Trojan Horse Viruses in E-Mail Messages
When a computer virus infects e-mail messages or infects other files on a computer, you may notice the following symptoms:
The infected file may make copies of itself. This behavior may use up all the free space on the hard disk.
A copy of the infected file may be sent to all the addresses in an e-mail address list.
The computer virus may reformat the hard disk. This behavior will delete files and programs.
The computer virus may install hidden programs, such as pirated software. This pirated software may then be distributed and sold from the computer.
The computer virus may reduce security. This could enable intruders to remotely access the computer or the network.
You receive an e-mail message that has a strange attachment. When you open the attachment, dialog boxes appear, or a sudden degradation in system performance occurs.
Someone tells you that they have recently received e-mail messages from you that contained attached files you did not send. The files attached to these messages have extensions such as .exe, .bat, .scr, and .vbs.
What is Spyware?
Spyware can install itself on your computer without your knowledge. These programs can change your computer’s configuration or collect advertising data and personal information. Spyware can track internet searching habits and possibly redirect web browsing activity.
Symptoms of Spyware
When a computer becomes infected with spyware, the following may result:
Slow internet connection.
Your web browser’s home page changes.
Loss of internet connectivity.
Some programs, including security software, fail to open.
You are unable to visit specific websites, and you may be redirected to another one.
How to Remove a Computer Virus and Spyware
Even for an expert, removing a computer virus or spyware can be a difficult task without the help of malicious software removal tools. Some computer viruses and other unwanted software reinstall themselves after the viruses and spyware have been detected and removed. Fortunately, by updating the computer and by using malicious software removal tools, you can help permanently remove unwanted software.
To remove a computer virus and other malicious software, follow these steps:
Install the latest updates from Microsoft Update:
For Windows Vista and Windows 7:
Click the Start button, then type Windows Update in the search box.
In the results area, click Windows Update.
Click Check for Updates.
Follow the instructions to download and install the latest Windows Updates.
For Windows XP:
Click Start, then click Run.
Type sysdm.cpl and press the Enter key.
Click the Automatic Updates tab and choose the Automatic (recommended) option.
Click OK.
Use the Microsoft Safety Scanner
Microsoft offers a free online tool that will scan and remove potential threats from your computer. To perform the scan, visit: http://www.microsoft.com/security/scanner/
Install and run Microsoft Security Essentials
Microsoft offers a free malicious software removal program, Microsoft Security Essentials, which will also help protect your computer from being infected. To install Microsoft Security Essentials, follow the steps below:
Go to the Microsoft Security Essentials website at: http://windows.microsoft.com/en-US/windows/products/security-essentials
Click Download Now.
If your browser prompts you to save or run the file, click Run.
Follow the steps to install Microsoft Security Essentials.
After installation, restart your computer and open Microsoft Security Essentials.
On the Home tab, choose the Full scan option, and then click Scan now.
For more information about how to remove a computer virus, visit the following Microsoft Web site: http://www.microsoft.com/protect/computer/viruses/remove.mspx
How to Protect Your Computer Against Viruses
To protect your computer against viruses, follow these steps:
Turn on the firewall.
For information on how to turn on your firewall with Windows XP, visit: http://support.microsoft.com/kb/283673
For information on how to turn on your firewall with Windows Vista, visit: http://windows.microsoft.com/en-US/windows-vista/Turn-Windows-Firewall-on-or-off
For information on how to turn on your firewall with Windows 7, visit: http://windows.microsoft.com/en-US/windows7/Turn-Windows-Firewall-on-or-off
Keep your computer up-to-date.
For information on how to set Automatic Updates in Windows, visit: http://support.microsoft.com/kb/306525
Install Microsoft Security Essentials and keep it up to date.
For more information on how to install and use Microsoft Security Essentials, visit: http://windows.microsoft.com/en-US/windows/products/security-essentials
For more information about how to protect a computer against viruses, visit the following Microsoft Web site:
http://www.microsoft.com/protect/computer/default.mspx
What are Rogue Virus Alerts?
Rogue security software programs will try to make you think that your machine is infected by a virus and usually prompt you to download and/or buy a removal product. The names of these products usually contain words like “Antivirus,” “Shield,” “Security,” “Protection,” or “Fixer,” so they appear to be legitimate. They will often run immediately when downloaded, or the next time your computer starts up. Rogue security software can prevent applications from opening, including Internet Explorer, and may display legitimate and very important Windows files as infections. Typical error messages or pop-ups you may receive may contain:
Warning!
Your computer is infected!
This computer is infected by spyware and adware.
A good sign that the software is not beneficial to you is that when you try to close the window it will continually pop up warnings similar to:
Are you sure you want to navigate from this page?
Your computer is infected! They can cause data lost and file corruption and need to be treated as soon as possible. Press CANCEL to prevent it. Return to System Security and download it to secure your PC.
Press OK to Continue or Cancel to stay on the current page.
It is strongly recommended that you don't download or purchase any kind of software that advertises in this manner.
Tuesday, November 22, 2011
SUPERCOMPUTER.
A supercomputer is a computer at the frontline of current processing capacity, particularly speed of calculation.
Supercomputers are used for highly calculation-intensive tasks such as problems involving quantum physics, weather forecasting, climate research, molecular modeling (computing the structures and properties of chemical compounds, biological macromolecules, polymers, and crystals), and physical simulations (such as simulation of airplanes in wind tunnels, simulation of the detonation of nuclear weapons, and research into nuclear fusion).
Supercomputers were introduced in the 1960s and were designed primarily by Seymour Cray at Control Data Corporation (CDC), which led the market into the 1970s until Cray left to form his own company, Cray Research. He then took over the supercomputer market with his new designs, holding the top spot in supercomputing for five years (1985–1990). In the 1980s a large number of smaller competitors entered the market, paralleling the creation of the minicomputer market a decade earlier, but many of these disappeared in the mid-1990s "supercomputer market crash".
Today, supercomputers are typically one-of-a-kind custom designs produced by traditional companies such as Cray, IBM, and Hewlett-Packard, which purchased many of the 1980s companies to gain their experience. Currently, Japan's K computer, built by Fujitsu in Kobe, Japan, is the fastest in the world. It is three times faster than the previous holder of that title, the Tianhe-1A supercomputer located in China.
The term supercomputer itself is rather fluid, and the speed of earlier "supercomputers" tends to become typical of future ordinary computers. CDC's early machines were simply very fast scalar processors, some ten times the speed of the fastest machines offered by other companies. In the 1970s most supercomputers were dedicated to running a vector processor, and many of the newer players developed their own such processors at a lower price to enter the market. The early and mid-1980s saw machines with a modest number of vector processors working in parallel to become the standard.
Typical numbers of processors were in the range of four to sixteen. In the later 1980s and 1990s, attention turned from vector processors to massively parallel processing systems with thousands of "ordinary" CPUs, some being off-the-shelf units and others being custom designs (see the Transputer, for instance). Today, parallel designs are based on "off the shelf" server-class microprocessors, such as the PowerPC, Opteron, or Xeon, and coprocessors like NVIDIA Tesla GPGPUs, AMD GPUs, IBM Cell, and FPGAs. Most modern supercomputers are now highly tuned computer clusters using commodity processors combined with custom interconnects.
Relevant here is the distinction between capability computing and capacity computing, as defined by Graham et al. Capability computing is typically thought of as using the maximum computing power to solve a large problem in the shortest amount of time. Often a capability system is able to solve a problem of a size or complexity that no other computer can. Capacity computing in contrast is typically thought of as using efficient cost-effective computing power to solve somewhat large problems or many small problems or to prepare for a run on a capability system.
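As a toy illustration of the parallel principle described above, the sketch below splits one large summation across several worker processes in Python. It is only a single-machine analogy; real supercomputers scale the same divide-and-combine idea to thousands of nodes over custom interconnects.

```python
# Toy illustration of data parallelism: split one big job across
# several worker processes, then combine the partial results.
from multiprocessing import Pool

def partial_sum(bounds):
    lo, hi = bounds
    return sum(range(lo, hi))

if __name__ == "__main__":
    n, workers = 10_000_000, 4
    step = n // workers
    chunks = [(i * step, (i + 1) * step) for i in range(workers)]
    with Pool(workers) as pool:
        total = sum(pool.map(partial_sum, chunks))
    print(total == sum(range(n)))  # True: same answer, work divided four ways
```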
Sunday, November 20, 2011
ENTERPRISE APPLICATION INTEGRATION.....
Definition: Enterprise Application Integration is the term used to describe the integration of the computer applications of an enterprise so as to maximise their utility throughout the enterprise.
In today’s business environment it has become essential for enterprises to make extensive use of computer systems and applications in order to establish and maintain a competitive advantage.
However, if these applications and systems are to provide the desired advantage it is imperative to ensure that their resources are available to all users and business processes that may benefit from their use.
Unfortunately, all too often these applications are not fully integrated within an organisation, preventing the seamless flow of information throughout the enterprise and forming ‘information silos’: isolated pools of information resources.
Connectivity Problems
The integration problems many enterprises face today are due to the fact that until relatively recently there was no expectation that applications should be able to ‘talk’ to each other. Until the advent of networks, computer applications were designed to perform a specific purpose, and were often written in a range of different programming languages and used different data structures from one another, with no thought given to integration.
Today, however, we expect all of our IT applications to speak the same language. Many vital business processes rely on access to data stored in a wide range of systems, so it is essential that they should be able to seamlessly share data in order to streamline workflow.
Ideally, enterprises would choose to start afresh, implementing an entirely new IT infrastructure designed with integration in mind. Unfortunately, most enterprises find this option prohibitively expensive and disruptive to the business, so they have no choice but to remain reliant on old, out-of-date legacy systems.
The efficiency problems this can cause should not be underestimated. Because every application needs a direct link to every other, n applications require n(n-1)/2 point-to-point connections. An enterprise running 10 separate applications therefore requires 45 point-to-point connections in order to achieve integration. A larger enterprise running 50 applications would require 1,225 connections, which would become a clear hindrance to efficiency.
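A quick Python check reproduces the figures in the paragraph above using that handshake formula:

```python
# Point-to-point integration links needed for n applications: n*(n-1)/2.
def connections(n: int) -> int:
    return n * (n - 1) // 2

for n in (10, 50):
    print(n, "applications ->", connections(n), "connections")
# 10 applications -> 45 connections
# 50 applications -> 1225 connections
```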
The challenge, therefore, is to find a technical solution to the problems that arise from application incompatibility.
Applications of EAI Software
There are many types of EAI software on the market (such as Sun Microsystems' SeeBeyond), each approaching the problem of integration from a different angle and presenting a different solution. However, there are four overarching purposes for which EAI software can be used to improve efficiency:
Data Integration
EAI software often comes with built-in application programming interfaces (APIs) by which it can effectively communicate with otherwise incompatible legacy systems, eliminating the need for multiple point-to-point connections between applications.
Data integration software works by providing homogeneous data representations or access points to a range of disparate data sources. By providing a ‘front end’ tool by which users can access data from many different databases, the software can greatly increase the efficiency of business processes that rely on these disparate databases.
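To illustrate the idea of one homogeneous access point over disparate sources, here is a small, hypothetical Python sketch in the adapter style such tools use. All class and method names are invented for the example and are not taken from any real EAI product.

```python
# Hypothetical adapter pattern: give two incompatible "legacy" data
# sources one homogeneous access point, as EAI data-integration tools do.

class LegacyCsvSystem:
    def fetch_row(self, key):          # imagined legacy interface
        return {"id": key, "name": "Alice", "source": "csv"}

class LegacySqlSystem:
    def run_query(self, key):          # a different imagined interface
        return (key, "Bob", "sql")

class UnifiedCustomerView:
    """Single front end: callers never see the underlying systems."""
    def __init__(self):
        self.csv, self.sql = LegacyCsvSystem(), LegacySqlSystem()

    def get_customer(self, key, system):
        if system == "csv":
            return self.csv.fetch_row(key)
        row = self.sql.run_query(key)
        return {"id": row[0], "name": row[1], "source": row[2]}

view = UnifiedCustomerView()
print(view.get_customer(1, "csv"))  # same shape of result...
print(view.get_customer(2, "sql"))  # ...regardless of the source
```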
Process Integration
Only by making resources available to every process and user within an enterprise will the full benefit be extracted from computer systems. Unfortunately, the development of department-specific systems has encouraged ‘islands of automation’ in many enterprises, where applications become isolated and are available only to a small portion of the enterprise.
EAI software offers the opportunity to bridge the gap between these applications. Whereas data integration standardises data across an enterprise, process integration standardises access to technology and resources.
Vendor Independence
EAI software is designed to allow for the future integration of new applications. By extracting rules and business policies from current data and applications and implementing them in the EAI system, it becomes possible to apply these rules to new applications added in the future with little disruption.
Common Façade
Perhaps most visibly, many EAI software packages offer the option of a complete front-end solution. There are many benefits to be found in providing a single access interface to the information systems of an enterprise. Primarily, a single access point can help reduce the complexity of many business processes within an enterprise. Additionally, a single interface will remove the necessity of training users to operate a range of different applications. Instead, a small measure of basic training can be sufficient to allow users to operate the EAI interface proficiently.
Further information regarding Enterprise Application Integration can be found at the Integration Consortium and the Intelligent Enterprise Magazine.
AT-10-TION.....MISS SHA RUBY'S MESSAGE.....
Salam. According to the schedule so far, the final exam for subject STID1013 will be held on:
MONDAY 09/01/12, 8.30 p.m. - STID1013 Sistem Maklumat Dalam Organisasi (Information Systems in Organisations)
I really hope that what you heard and saw during today's workshop can be put to use for the exam. You are the ones who determine your own futures. If you want to succeed, appreciate the opportunity right in front of you; do not regret it later.
Be diligent about coming to class, and be diligent about exploring, because IT is not found only in books and notes; IT is all around you.
Thank you~
VIRTUAL PRIVATE NETWORK (VPN).....
A VPN - Virtual Private Network - is one solution to establishing long-distance and/or secured network connections. VPNs are normally implemented (deployed) by businesses or organizations rather than by individuals, but virtual networks can be reached from inside a home network. Compared to other technologies, VPNs offer several advantages, particularly benefits for wireless local area networking. For an organization looking to provide a secured network infrastructure for its client base, a VPN offers two main advantages over alternative technologies: cost savings and network scalability. To the clients accessing these networks, VPNs also bring some benefits of ease of use.
Cost Savings with a VPN
A VPN can save an organization money in several situations:
eliminating the need for expensive long-distance leased lines
reducing long-distance telephone charges
offloading support costs
VPNs vs. Leased Lines
Organizations historically needed to rent network capacity such as T1 lines to achieve full, secured connectivity between their office locations. With a VPN, you use public network infrastructure including the Internet to make these connections and tap into that virtual network through much cheaper local leased lines or even just broadband connections to a nearby Internet Service Provider (ISP).
Long Distance Phone Charges
A VPN also can replace remote access servers and long-distance dialup network connections commonly used in the past by business travelers needing access to their company intranet. For example, with an Internet VPN, clients need only connect to the nearest service provider's access point, which is usually local.
Support Costs
With VPNs, the cost of maintaining servers tends to be less than with other approaches because organizations can outsource the needed support to professional third-party service providers. These providers enjoy a much lower cost structure through economies of scale by servicing many business clients.
VPN Network Scalability
The cost to an organization of building a dedicated private network may be reasonable at first but increases exponentially as the organization grows. A company with two branch offices, for example, can deploy just one dedicated line to connect the two locations, but 4 branch offices require 6 lines to directly connect them to each other, 6 branch offices need 15 lines, and so on.
Internet-based VPNs avoid this scalability problem by simply tapping into the public lines and network capability readily available. Particularly for remote and international locations, an Internet VPN offers superior reach and quality of service.
Using a VPN
To use a VPN, each client must possess the appropriate networking software or hardware support on their local network and computers. When set up properly, VPN solutions are easy to use and sometimes can be made to work automatically as part of network sign on.
VPN technology also works well with WiFi local area networking. Some organizations use VPNs to secure wireless connections to their local access points when working inside the office. These solutions provide strong protection without affecting performance excessively.
Limitations of a VPN
Despite their popularity, VPNs are not perfect, and limitations exist, as is true for any technology. Organizations should consider issues like those below when deploying and using virtual private networks in their operations:
1. VPNs require detailed understanding of network security issues and careful installation / configuration to ensure sufficient protection on a public network like the Internet.
2. The reliability and performance of an Internet-based VPN is not under an organization's direct control. Instead, the solution relies on an ISP and their quality of service.
3. Historically, VPN products and solutions from different vendors have not always been compatible due to issues with VPN technology standards. Attempting to mix and match equipment may cause technical problems, and using equipment from one provider may not give as great a cost saving.
Friday, November 18, 2011
LOCAL AREA NETWORK (LAN).....
A computer network that spans a relatively small area. Most LANs are confined to a single building or group of buildings. However, one LAN can be connected to other LANs over any distance via telephone lines and radio waves. A system of LANs connected in this way is called a wide-area network (WAN).
Most LANs connect workstations and personal computers. Each node (individual computer) in a LAN has its own CPU with which it executes programs, but it also is able to access data and devices anywhere on the LAN. This means that many users can share expensive devices, such as laser printers, as well as data. Users can also use the LAN to communicate with each other, by sending e-mail or engaging in chat sessions.
There are many different types of LANs, Ethernet being the most common for PCs. Most Apple Macintosh networks are based on Apple's AppleTalk network system, which is built into Macintosh computers.
The following characteristics differentiate one LAN from another:
Topology : The geometric arrangement of devices on the network. For example, devices can be arranged in a ring or in a straight line.
Protocols : The rules and encoding specifications for sending data. The protocols also determine whether the network uses a peer-to-peer or client/server architecture.
Media : Devices can be connected by twisted-pair wire, coaxial cables, or fiber optic cables. Some networks do without connecting media altogether, communicating instead via radio waves.
LANs are capable of transmitting data at very fast rates, much faster than data can be transmitted over a telephone line; but the distances are limited, and there is also a limit on the number of computers that can be attached to a single LAN.
WIDE AREA NETWORK (WAN)...
Definition: A WAN spans a large geographic area, such as a state, province or country. WANs often connect multiple smaller networks, such as local area networks (LANs) or metro area networks (MANs). The world's most popular WAN is the Internet. Some segments of the Internet, like VPN-based extranets, are also WANs in themselves. Finally, many WANs are corporate or research networks that utilize leased lines.
WANs generally utilize different and much more expensive networking equipment than do LANs. Key technologies often found in WANs include SONET, Frame Relay, and ATM.
Thursday, November 17, 2011
MODEM.....
A modem modulates outgoing digital signals from a computer or other digital device to analog signals for a conventional copper twisted-pair telephone line, and demodulates the incoming analog signal, converting it to a digital signal for the computer or digital device.
In recent years, the 2400 bits per second modem that could carry e-mail has become obsolete. 14.4 Kbps and 28.8 Kbps modems were temporary landing places on the way to the much higher bandwidth devices and carriers of tomorrow. From early 1998, most new personal computers came with 56 Kbps modems. By comparison, using a digital Integrated Services Digital Network (ISDN) adapter instead of a conventional modem, the same telephone wire can now carry up to 128 Kbps. With Digital Subscriber Line (DSL) systems, now being deployed in a number of communities, bandwidth on twisted-pair can be in the megabit range.
GRAPHICAL USER INTERFACE (GUI)..........
Abbreviated GUI (pronounced GOO-ee). A program interface that takes advantage of the computer's graphics capabilities to make the program easier to use. Well-designed graphical user interfaces can free the user from learning complex command languages. On the other hand, many users find that they work more effectively with a command-driven interface, especially if they already know the command language.
Graphical user interfaces, such as Microsoft Windows and the one used by the Apple Macintosh, feature the following basic components:
Pointer : A symbol that appears on the display screen and that you move to select objects and commands. Usually, the pointer appears as a small angled arrow. Text-processing applications, however, use an I-beam pointer that is shaped like a capital I.
Pointing Device : A device, such as a mouse or trackball, that enables you to select objects on the display screen.
Icons : Small pictures that represent commands, files, or windows. By moving the pointer to the icon and pressing a mouse button, you can execute a command or convert the icon into a window. You can also move the icons around the display screen as if they were real objects on your desk.
Desktop : The area on the display screen where icons are grouped is often referred to as the desktop because the icons are intended to represent real objects on a real desktop.
Windows : You can divide the screen into different areas. In each window, you can run a different program or display a different file. You can move windows around the display screen, and change their shape and size at will.
Menus : Most graphical user interfaces let you execute commands by selecting a choice from a menu.
The first graphical user interface was designed by Xerox Corporation's Palo Alto Research Center in the 1970s, but it was not until the 1980s and the emergence of the Apple Macintosh that graphical user interfaces became popular. One reason for their slow acceptance was the fact that they require considerable CPU power and a high-quality monitor, which until recently were prohibitively expensive.
In addition to their visual components, graphical user interfaces also make it easier to move data from one application to another. A true GUI includes standard formats for representing text and graphics. Because the formats are well-defined, different programs that run under a common GUI can share data. This makes it possible, for example, to copy a graph created by a spreadsheet program into a document created by a word processor.
Many DOS programs include some features of GUIs, such as menus, but are not graphics based. Such interfaces are sometimes called graphical character-based user interfaces to distinguish them from true GUIs.
URL.....
Abbreviation of Uniform Resource Locator. A URL is the global address of documents and other resources on the World Wide Web.
The first part of the address is called a protocol identifier and it indicates what protocol to use, and the second part is called a resource name and it specifies the IP address or the domain name where the resource is located. The protocol identifier and the resource name are separated by a colon and two forward slashes.
For example, the two URLs below point to two different files at the domain pcwebopedia.com. The first specifies an executable file that should be fetched using the FTP protocol; the second specifies a Web page that should be fetched using the HTTP protocol:
- ftp://www.pcwebopedia.com/stuff.exe
- http://www.pcwebopedia.com/index.html
A URL is one type of Uniform Resource Identifier (URI), the generic term for all types of names and addresses that refer to objects on the World Wide Web.
The term "Web address" is a synonym for a URL that uses the HTTP / HTTPS protocol.The Uniform Resource Locator (URL) was developed by Tim Berners-Lee in 1994 and the Internet Engineering Task Force (IETF) URI working group. The URL format is specified in RFC 1738 Uniform Resource Locators (URL).
Tuesday, November 15, 2011
BROADBAND........
The term broadband refers to a telecommunications signal or device of greater bandwidth, in some sense, than another standard or usual signal or device (and the broader the band, the greater the capacity for traffic). Different criteria for "broad" have been applied in different contexts and at different times. Its origin is in physics, acoustics and radio systems engineering, where it had been used with a meaning similar to wide band. However, the term became popularized through the 1990s as a vague marketing term for Internet access.
Broadband in telecommunications refers to a signaling method that includes or handles a relatively wide range (or band) of frequencies. Broadband is always a relative term, understood according to its context. The wider (or broader) the bandwidth of a channel, the greater the information-carrying capacity, given the same channel quality. In radio, for example, a very narrow-band signal will carry Morse code; a broader band will carry speech; a still broader band is required to carry music without losing the high audio frequencies required for realistic sound reproduction. This broad band is often divided into channels or frequency bins using passband techniques to allow frequency-division multiplexing, instead of sending one higher-quality signal. A television antenna described as "broadband" may be capable of receiving a wide range of channels; while a single-frequency or Lo-VHF antenna is "narrowband" since it only receives 1 to 5 channels. The US federal standard FS-1037C defines "broadband" just as a synonym for wideband.
In data communications, a 56k modem will transmit a data rate of 56 kilobits per second (kbit/s) over a 4 kilohertz wide telephone line (narrowband or voiceband). The various forms of Digital Subscriber Line (DSL) services are broadband in the sense that digital information is sent over a high-bandwidth channel. This channel is at a higher frequency than the baseband voice channel, so it can support plain old telephone service on a single pair of wires at the same time. However, when that same line is converted to a non-loaded twisted-pair wire (no telephone filters), it becomes hundreds of kilohertz wide (broadband) and can carry several megabits per second using very-high-bitrate digital subscriber line (VDSL) techniques.
In the late 1980s, the Broadband Integrated Services Digital Network (B-ISDN) used the term to refer to a broad range of bit rates, independent of physical modulation details.
Many computer networks use a simple line code to transmit one type of signal using a medium's full bandwidth using its baseband (from zero through the highest frequency needed). Most versions of the popular Ethernet family are given names such as the original 1980s 10BASE5 to indicate this. Networks that use cable modems on standard cable television infrastructure are called broadband to indicate the wide range of frequencies that can include multiple data users as well as traditional television channels on the same cable. Broadband systems usually use a different radio frequency modulated by the data signal for each band. The total bandwidth of the medium is larger than the bandwidth of any channel. The 10BROAD36 broadband variant of Ethernet was standardized by 1985, but was not commercially successful. The DOCSIS standard became available to consumers in the late 1990s, to provide Internet access to cable television residential customers. Matters were further confused by the fact that the 10PASS-TS standard for Ethernet, ratified in 2008, used DSL technology, and both cable and DSL modems often have Ethernet connectors on them.
Power lines have also been used for various types of data communication. Although some systems for remote control are based on narrowband signaling, modern high-speed systems use broadband signaling to achieve very high data rates. One example is the ITU-T G.hn standard, which provides a way to create a high-speed (up to 1 Gigabit/s) Local area network using existing home wiring (including power lines, but also phone lines and coaxial cables).
Broadband in analog video distribution is traditionally used to refer to systems such as cable television, where the individual channels are modulated on carriers at fixed frequencies. In this context, baseband is the term's antonym, referring to a single channel of analog video, typically in composite form with separate baseband audio. The act of demodulating converts broadband video to baseband video.
However, broadband video in the context of streaming Internet video has come to mean video files that have bitrates high enough to require broadband Internet access in order to view them. Broadband video is also sometimes used to describe IPTV video on demand.
Internet access
The standards group CCITT defined "broadband service" in 1988 as requiring transmission channels capable of supporting bit rates greater than the primary rate, which ranged from about 1.5 to 2 Mbit/s. The US National Information Infrastructure project during the 1990s brought the term into public policy debates. Broadband became a marketing buzzword for telephone and cable companies to sell their more expensive higher data rate products, especially for Internet access. In the US National Broadband Plan of 2009 it was defined as "Internet access that is always on and faster than the traditional dial-up access". The same agency has defined it differently through the years.
Even though information signals generally travel at nearly the speed of light in the medium no matter what the bit rate, higher rate services are often marketed as "faster" or "higher speeds". (This use of the word "speed" may or may not be appropriate, depending on context. It would be accurate, for instance, to say that a file of a given size will typically take less time to finish transferring if it is being transmitted via broadband as opposed to dial-up.) Consumers are also targeted by advertisements for peak transmission rates, while actual end-to-end rates observed in practice can be lower due to other factors.
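As a rough worked example of why higher data rates feel "faster" (the file size here is invented, and protocol overhead is ignored):

    FILE_BITS = 5 * 1_000_000 * 8  # a hypothetical 5-megabyte file, in bits

    for name, rate_bps in [("56k dial-up", 56_000), ("1.5 Mbit/s broadband", 1_500_000)]:
        print(f"{name}: about {FILE_BITS / rate_bps:.0f} seconds")
    # 56k dial-up: about 714 seconds; 1.5 Mbit/s broadband: about 27 seconds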
E-BUSINESS...
Electronic business, commonly referred to as "eBusiness" or "e-business", or an internet business, may be defined as the application of information and communication technologies (ICT) in support of all the activities of business. Commerce constitutes the exchange of products and services between businesses, groups and individuals and can be seen as one of the essential activities of any business. Electronic commerce focuses on the use of ICT to enable the external activities and relationships of the business with individuals, groups and other businesses. The term "e-business" was coined by IBM's marketing and Internet teams in 1996.
Electronic business methods enable companies to link their internal and external data processing systems more efficiently and flexibly, to work more closely with suppliers and partners, and to better satisfy the needs and expectations of their customers.
In practice, e-business is more than just e-commerce. While e-business refers to a more strategic focus, with an emphasis on the functions that occur using electronic capabilities, e-commerce is a subset of an overall e-business strategy. E-commerce seeks to add revenue streams using the World Wide Web or the Internet to build and enhance relationships with clients and partners and to improve efficiency.
E-business involves business processes spanning the entire value chain: electronic purchasing and supply chain management, processing orders electronically, handling customer service, and cooperating with business partners. Special technical standards for e-business facilitate the exchange of data between companies. E-business software solutions allow the integration of intra and inter firm business processes. E-business can be conducted using the Web, the Internet, intranets, extranets, or some combination of these.
Basically, electronic commerce (EC) is the process of buying, transferring, or exchanging products, services, and/or information via computer networks, including the internet. EC can also be viewed from many perspectives, including business processes, services, learning, collaboration, and community. EC is often confused with e-business.
ELECTRONIC BUSINESS SECURITY.
E-business systems naturally have greater security risks than traditional business systems, so it is important for e-business systems to be fully protected against these risks. A far greater number of people have access to e-businesses through the internet than would have access to a traditional business. Customers, suppliers, employees, and numerous other people use any particular e-business system daily and expect their confidential information to stay secure. Hackers are one of the great threats to the security of e-businesses. Some common security concerns for e-businesses include keeping business and customer information private and confidential, authenticity of data, and data integrity. Some of the methods of protecting e-business security and keeping information secure include physical security measures as well as data storage, data transmission, anti-virus software, firewalls, and encryption, to name a few.
Privacy and Confidentiality
Confidentiality is the extent to which a business makes personal information available to other businesses and individuals. With any business, confidential information must remain secure and be accessible only to the intended recipient. However, this becomes even more difficult when dealing with e-businesses specifically. Keeping such information secure means protecting any electronic records and files from unauthorized access, as well as ensuring safe transmission and data storage of that information. Tools such as encryption and firewalls manage this specific concern within e-business.
Authenticity
E-business transactions pose greater challenges for establishing authenticity due to the ease with which electronic information may be altered and copied. Both parties in an e-business transaction want to have the assurance that the other party is who they claim to be, especially when a customer places an order and then submits a payment electronically. One common way to ensure this is to limit access to a network to trusted parties by using virtual private network (VPN) technology. The establishment of authenticity is even stronger when a combination of techniques is used, such as checking "something you know" (e.g., a password or PIN), "something you have" (e.g., a credit card), or "something you are" (e.g., a digital signature or voice recognition). Many times in e-business, however, "something you are" is in practice verified by checking the purchaser's "something you have" (e.g., a credit card) and "something you know" (e.g., the card number).
Data integrity
Data integrity answers the question “Can the information be changed or corrupted in any way?” This leads to the assurance that the message received is identical to the message sent. A business needs to be confident that data is not changed in transit, whether deliberately or by accident. To help with data integrity, firewalls protect stored data against unauthorized access, while simply backing up data allows recovery should the data or equipment be damaged.
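One simple sketch of this idea in Python: compare cryptographic digests of the sent and received messages. (The order text is invented, and real systems typically use keyed MACs or TLS rather than a bare hash, but the principle is the same.)

    import hashlib

    def fingerprint(data: bytes) -> str:
        # The digest changes if even a single bit of the data changes.
        return hashlib.sha256(data).hexdigest()

    sent     = b"Order #1042: 3 units, ship to warehouse B"
    received = b"Order #1042: 9 units, ship to warehouse B"  # altered in transit
    print(fingerprint(sent) == fingerprint(received))  # False -> integrity check fails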
Non-repudiation
This concern deals with the existence of proof in a transaction. A business must have assurance that the receiving party or purchaser cannot deny that a transaction has occurred, and this means having sufficient evidence to prove the transaction. One way to address non-repudiation is using digital signatures. A digital signature not only ensures that a message or document has been electronically signed by the person, but since a digital signature can only be created by one person, it also ensures that this person cannot later deny that they provided their signature.
Access control
When certain electronic resources and information are limited to only a few authorized individuals, a business and its customers must have the assurance that no one else can access the systems or information. Fortunately, there is a variety of techniques to address this concern, including firewalls, access privileges, user identification and authentication techniques (such as passwords and digital certificates), Virtual Private Networks (VPN), and more.
Availability
This concern is specifically pertinent to a business' customers, as certain information must be available when customers need it. Messages must be delivered in a reliable and timely fashion, and information must be stored and retrieved as required. Because availability of service is important for all e-business websites, steps must be taken to prevent disruption of service by events such as power outages and damage to physical infrastructure. Examples to address this include data backup, fire-suppression systems, Uninterruptible Power Supply (UPS) systems, virus protection, as well as making sure that there is sufficient capacity to handle the demands posed by heavy network traffic.
Common Security Measures for E-Business Systems
Many different forms of security exist for e-businesses. Some general security guidelines include areas in physical security, data storage, data transmission, application development, and system administration.
Physical security
Despite e-business being business done online, there are still physical security measures that can be taken to protect the business as a whole. Even though business is done online, the building that houses the servers and computers must be protected and have limited access to employees and other persons. For example, this room should only allow authorized users to enter, and should ensure that “windows, dropped ceilings, large air ducts, and raised floors” do not allow easy access to unauthorized persons. Preferably these important items would be kept in an air-conditioned room without any windows.
Protecting equipment against environmental hazards is just as important a part of physical security as protecting it against unauthorized users. The room may protect the equipment against flooding by keeping all equipment raised off of the floor. In addition, the room should contain a fire extinguisher in case of fire. The organization should have a fire plan in case this situation arises.
In addition to keeping the servers and computers safe, physical security of confidential information is important. This includes client information such as credit card numbers, checks, phone numbers, etc. It also includes any of the organization's private information. Locking physical and electronic copies of this data in a drawer or cabinet is one additional measure of security. Doors and windows leading into this area should also be securely locked. Only employees that need to use this information as part of their job should be given keys.
Important information can also be kept secure by keeping backups of files and updating them on a regular basis. It is best to keep these backups in a separate secure location in case there is a natural disaster or breach of security at the main location.
“Failover sites” can be built in case there is a problem with the main location. This site should be just like the main location in terms of hardware, software, and security features. This site can be used in case of fire or natural disaster at the original site. It is also important to test the “failover site” to ensure it will actually work if the need arises.
State-of-the-art security systems, such as the one used at Tidepoint's headquarters, might include access control, alarm systems, and closed-circuit television. One form of access control is face (or other feature) recognition systems. This allows only authorized personnel to enter, and also serves the purpose of convenience for employees who don't have to carry keys or cards. Cameras can also be placed throughout the building and at all points of entry. Alarm systems also serve as an added measure of protection against theft.
Data storage
Storing data in a secure manner is very important to all businesses, but especially to e-businesses where most of the data is stored in an electronic manner. Data that is confidential should not be stored on the e-business' server, but instead moved to another physical machine to be stored. If possible this machine should not be directly connected to the internet, and should also be stored in a safe location. The information should be stored in an encrypted format.
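A minimal sketch of encrypting a record before it is written to storage, assuming the third-party Python cryptography package (pip install cryptography); the customer record here is invented:

    from cryptography.fernet import Fernet

    key = Fernet.generate_key()   # keep this key on a separate, secured machine
    f = Fernet(key)

    token = f.encrypt(b"customer: Jane Doe, card ending 4242")
    # write `token` to disk instead of the plaintext ...
    print(f.decrypt(token))       # recoverable only by a holder of the key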
Highly sensitive information should not be stored at all, if possible. If it does need to be stored, it should be kept on only a few reliable machines to prevent easy access. Extra security measures should be taken to protect this information (such as private keys) where possible. Additionally, information should only be kept for a short period of time, and once it is no longer necessary it should be deleted to prevent it from falling into the wrong hands.
Similarly, backups and copies of information should be kept secure with the same security measures as the original information. Once a backup is no longer needed, it should be carefully but thoroughly destroyed.
Data transmission and application development
All sensitive information being transmitted should be encrypted. Businesses can opt to refuse clients who can't accept this level of encryption. Confidential and sensitive information should also never be sent through e-mail. If it must be, then it should also be encrypted.
Transferring and displaying secure information should be kept to a minimum. This can be done by never displaying a full credit card number, for example. Only a few of the digits may be shown, and changes to this information can be made without displaying the full number. It should also be impossible to retrieve this information online. Source code should also be kept in a secure location and should not be visible to the public. Applications and changes should be tested for reliability and compatibility before they are placed online.
System administration
The default security settings of operating systems should be hardened immediately. Patches and software updates should be applied in a timely manner. All system configuration changes should be kept in a log and promptly updated.
System administrators should keep watch for suspicious activity within the business by inspecting log files and researching repeated logon failures. They can also audit their e-business system and look for any holes in the security measures. It is important to make sure plans for security are in place but also to test the security measures to make sure they actually work. With the use of social engineering, the wrong people can get a hold of confidential information. To protect against this, staff can be made aware of social engineering and trained to properly deal with sensitive information.
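A toy sketch of scanning a log for repeated logon failures, as recommended above; the log format and alert threshold here are hypothetical, not from any particular product:

    from collections import Counter

    log_lines = [
        "2011-11-17 09:01 LOGIN FAIL user=admin ip=203.0.113.7",
        "2011-11-17 09:01 LOGIN FAIL user=admin ip=203.0.113.7",
        "2011-11-17 09:02 LOGIN OK   user=jsmith ip=198.51.100.4",
        "2011-11-17 09:03 LOGIN FAIL user=admin ip=203.0.113.7",
    ]

    failures = Counter(line.rsplit("ip=", 1)[1] for line in log_lines if " FAIL " in line)
    for ip, count in failures.items():
        if count >= 3:
            print(f"suspicious: {count} failed logons from {ip}")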
E-businesses may use passwords for employee logons, for accessing secure information, or by customers. Passwords should be made as difficult to guess as possible. They should consist of both letters and numbers, and be at least seven to eight characters long. They should not contain any names, birth dates, etc. Passwords should be changed frequently and should be unique each time. Only the password's user should know the password, and it should never be written down or stored anywhere. Users should also be locked out of the system after a certain number of failed logon attempts to prevent guessing of passwords.
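A minimal sketch of checking the policy just described (the function name is my own; a real system would also hash and salt stored passwords and enforce the lockout rule server-side):

    import string

    def acceptable_password(pw: str) -> bool:
        # Letters and numbers, at least eight characters, per the guidance above.
        has_letter = any(c in string.ascii_letters for c in pw)
        has_digit  = any(c in string.digits for c in pw)
        return len(pw) >= 8 and has_letter and has_digit

    print(acceptable_password("kd93mQ7x"))   # True
    print(acceptable_password("password"))   # False: no digits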
Security Solutions
When it comes to security solutions, there are several main goals to be met: data integrity, strong authentication, and privacy.
Access and data integrity
There are several different ways to prevent access to the data that is kept online. One way is to use anti-virus software, which most people use to protect their networks regardless of the data they hold. E-businesses should use it so they can be sure that the information sent to and received from their systems is clean. A second way to protect the data is to use firewalls and network protection. A firewall is used to restrict access to private networks, as well as public networks that a company may use. The firewall also has the ability to log attempts to enter the network and provide warnings as they are happening.
Firewalls are very effective at keeping third parties out of the network. Businesses that use Wi-Fi need to consider different forms of protection because these networks are easier for someone to access. They should look into protected access, virtual private networks, or internet protocol security. Another option is an intrusion detection system, which raises an alert when there is a possible intrusion. Some companies set up traps, or "honeypots", to attract attackers and are then able to know when someone is trying to hack into that area.
Encryption
Encryption, which is a part of cryptography, involves transforming text or messages into a code which is unreadable. These messages have to be decrypted in order to be understandable or usable. A key ties the encrypted data to a certain person or company. With public key encryption, there are actually two keys: one public and one private. The public one is used for encryption, and the private one for decryption. The strength of the encryption can be adjusted and should be based on the sensitivity of the information. The key can be as simple as a shift of letters (a Caesar cipher) or a completely random arrangement. Encryption is relatively easy to implement because there is software that a company can purchase. A company also needs to be sure that its keys are registered with a certificate authority.
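A minimal public-key round trip, assuming the third-party Python cryptography package; registering the key with a certificate authority, mentioned above, is a separate administrative step not shown here:

    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa, padding

    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    public_key = private_key.public_key()
    oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)

    ciphertext = public_key.encrypt(b"card ending 4242", oaep)  # anyone may encrypt
    print(private_key.decrypt(ciphertext, oaep))                # only the key holder decrypts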
Digital certificates
The point of a digital certificate is to identify the owner of a document, so that the receiver knows it is an authentic document. Companies can use these certificates in several different ways. They can be used as a replacement for user names and passwords, and each employee can be given one to access the documents they need from wherever they are. These certificates also use encryption, but they are a little more complicated than ordinary encryption: they embed identifying information within the encrypted data in order to assure the authenticity of the documents as well as the confidentiality and data integrity that always accompany encryption. Digital certificates are not commonly used because they are confusing for people to implement. There can be complications when using different browsers, which means multiple certificates may be needed. The process is being adjusted so that it is easier to use.
Digital Signatures
A final way to secure information online is to use a digital signature. If a document carries a digital signature, no one else is able to edit the information without being detected; if it is edited, the tampering can be detected after the fact. In order to use a digital signature, one must use a combination of cryptography and a message digest. A message digest is used to give the document a unique value. That value is then encrypted with the sender's private key.
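A sketch of that digest-then-sign flow using RSA-PSS from the same third-party cryptography package (it computes the message digest internally); the contract text is invented, and verify() raises InvalidSignature if the document was edited after signing:

    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa, padding

    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    document = b"Contract: buyer agrees to pay on delivery."
    pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                      salt_length=padding.PSS.MAX_LENGTH)

    signature = private_key.sign(document, pss, hashes.SHA256())
    private_key.public_key().verify(signature, document, pss, hashes.SHA256())
    print("signature valid; document unchanged")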
Sunday, November 13, 2011
HISTORY OF DATA MINING
The manual extraction of patterns from data has occurred for centuries. Early methods of identifying patterns in data include Bayes' theorem (1700s) and regression analysis (1800s). The proliferation, ubiquity and increasing power of computer technology have increased data collection, storage and manipulation. As data sets have grown in size and complexity, direct hands-on data analysis has increasingly been augmented with indirect, automatic data processing. This has been aided by other discoveries in computer science, such as neural networks, clustering, genetic algorithms (1950s), decision trees (1960s) and support vector machines (1990s). Data mining is the process of applying these methods to data with the intention of uncovering hidden patterns. It has been used for many years by businesses, scientists and governments to sift through volumes of data such as airline passenger trip records, census data and supermarket scanner data to produce market research reports. (Note, however, that reporting is not always considered to be data mining.)
A primary reason for using data mining is to assist in the analysis of collections of observations of behavior. Such data is vulnerable to collinearity because of unknown interrelations. An unavoidable fact of data mining is that the (sub-)set(s) of data being analyzed may not be representative of the whole domain, and therefore may not contain examples of certain critical relationships and behaviors that exist across other parts of the domain. To address this sort of issue, the analysis may be augmented using experiment-based and other approaches, such as choice modelling for human-generated data. In these situations, inherent correlations can be either controlled for, or removed altogether, during the construction of the experimental design.
DATA MINING
Data mining (the analysis step of the knowledge discovery in databases process, or KDD), a relatively young and interdisciplinary field of computer science, is the process of discovering new patterns from large data sets using methods at the intersection of artificial intelligence, machine learning, statistics and database systems. The goal of data mining is to extract knowledge from a data set in a human-understandable structure, and it involves database and data management, data preprocessing, model and inference considerations, interestingness metrics, complexity considerations, post-processing of discovered structure, visualization and online updating.
The term is a buzzword, and is frequently misused to mean any form of large-scale data or information processing (collection, extraction, warehousing, analysis and statistics), and also generalized to any kind of computer decision support system, including artificial intelligence, machine learning and business intelligence. In the proper use of the word, the key term is discovery, commonly defined as "detecting something new". Even the popular book "Data mining: Practical machine learning tools and techniques with Java" (which covers mostly machine learning material) was originally to be named just "Practical machine learning", and the term "data mining" was only added for marketing reasons. Often the more general terms "(large-scale) data analysis" or "analytics", or, when referring to the actual methods, "artificial intelligence" and "machine learning", are more appropriate.
The actual data-mining task is the automatic or semi-automatic analysis of large quantities of data to extract previously unknown interesting patterns such as groups of data records (cluster analysis), unusual records (anomaly detection) and dependencies (association rule mining). This usually involves using database techniques such as spatial indexes. These patterns can then be seen as a kind of summary of the input data, and used in further analysis or for example in machine learning and predictive analytics. For example, the data mining step might identify multiple groups in the data, which can then be used to obtain more accurate prediction results by a decision support system. Neither the data collection, data preparation nor result interpretation and reporting are part of the data mining step, but do belong to the overall KDD process as additional steps.
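As a toy example of that pattern-extraction step, the sketch below clusters a handful of hypothetical supermarket-style records with scikit-learn; the tool and the data are assumptions, chosen only to illustrate cluster analysis.

import numpy as np
from sklearn.cluster import KMeans

# Hypothetical records: (items per basket, spend per visit).
records = np.array([
    [2, 15], [3, 18], [2, 14],     # small, cheap baskets
    [12, 90], [11, 85], [13, 95],  # large, expensive baskets
])

model = KMeans(n_clusters=2, n_init=10, random_state=0).fit(records)
print(model.labels_)           # group membership for each record
print(model.cluster_centers_)  # a compact summary of the input data

# A record far from every centre might be flagged as an anomaly.
print(model.transform(np.array([[40, 10]])))  # distances to each centre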
The related terms data dredging, data fishing and data snooping refer to the use of data mining methods to sample parts of a larger population data set that are (or may be) too small for reliable statistical inferences to be made about the validity of any patterns discovered. These methods can, however, be used in creating new hypotheses to test against the larger data populations.
HISTORY OF INTERNET 2
As the Internet gained in public recognition and popularity, universities were among the first institutions to outgrow the Internet's bandwidth limitations, because of the data-transfer capacity that many academic researchers need in order to collaborate with their colleagues. Some universities wanted to support high-performance applications like data mining, medical imaging and particle physics. This resulted in the creation of the very-high-performance Backbone Network Service, or vBNS, developed in 1995 by the National Science Foundation (NSF) and MCI, specifically to meet the needs of the supercomputers at educational institutions. The concept of the "next-generation Internet" was born. After the expiration of the NSF agreement, vBNS largely transitioned to providing service to the government. As a result, the research and education community founded Internet2 to serve its unique networking needs.
The Internet2 Project was originally established by 34 university researchers in 1996 under the auspices of EDUCOM (later EDUCAUSE), and was formally organized as the not-for-profit University Corporation for Advanced Internet Development (UCAID) in 1997. It later changed its name to Internet2. Internet2 is a registered trademark. The Internet2 consortium administrative headquarters is located in Ann Arbor, Michigan, with offices in Washington, D.C.
The Internet2 community, in partnership with Qwest, built the first Internet2 Network, called Abilene, in 1998, and was a prime investor in the National LambdaRail (NLR) project. During 2004–2006, Internet2 and NLR held extensive discussions regarding a possible merger. Those talks paused in spring 2006, resumed in March 2007, but eventually ceased in the fall of 2007 due to unresolved differences.
In 2006, Internet2 announced a partnership with Level 3 Communications to launch a brand-new nationwide network, boosting its capacity from 10 Gbit/s to 100 Gbit/s. In October 2007, Internet2 officially retired Abilene and now refers to its new, higher-capacity network as the Internet2 Network.
Objectives
Internet2 provides the U.S. research and education community with a network that satisfies their bandwidth-intensive requirements. The network itself is a dynamic, robust and cost-effective hybrid optical and packet network. It furnishes a 100 Gbit/s network backbone to more than 210 U.S. educational institutions, 70 corporations and 45 non-profit and government agencies.
The objectives of the Internet2 consortium are:
Developing and maintaining a leading-edge network.
Fully exploiting the capabilities of broadband connections through the use of new-generation applications.
Transferring new network services and applications to all levels of educational use, and eventually the broader Internet community.
The uses of the network range from collaborative applications, distributed research experiments, and grid-based data analysis to social networking. Some of these applications are at varying levels of commercialization, such as IPv6, open-source middleware for secure network access, Layer 2 VPNs and dynamic circuit networks.
Achievements
These technologies and their organizational counterparts were not created merely to make a faster alternative to the Internet. Many fields have been able to use the Abilene network to foster creativity, research, and development in ways that were not previously possible. Users of poorly resourced libraries can now download not only text but sound recordings, animations, videos, and other resources that would otherwise be unavailable. Another application is the robust video conferencing now available to Internet2 participants: neurosurgeons can video conference with other experts in the field during an operation, in a high-resolution format with no apparent time lag.
INTERNET 2
Internet2 is an advanced not-for-profit US networking consortium led by members from the research and education communities, industry, and government.
In 2009, Internet2 member rolls included over 200 higher education institutions, over 40 members from industry, over 30 research and education network and connector organizations, and over 50 affiliate members.
Internet2 operates the Internet2 Network, a next-generation Internet Protocol and optical network that delivers production network services to meet the high-performance demands of research and education, and provides a secure network testing and research environment. In late 2007, Internet2 began operating its newest dynamic circuit network, the Internet2 DCN, an advanced technology that allows user-based allocation of high-capacity data circuits over the fiber-optic network.
The Internet2 Network, through its regional network and connector members, connects over 60,000 U.S. educational, research, government and "community anchor" institutions, from primary and secondary schools to community colleges and universities, public libraries and museums to health care organizations.
The Internet2 community is actively engaged in developing and deploying emerging network technologies beyond the scope of single institutions and critical to the future of the Internet. These technologies include large-scale network performance measurement and management tools, simple and secure identity and access management tools and advanced capabilities such as the on-demand creation and scheduling of high-bandwidth, high-performance circuits.
Internet2 is member-led and member-focused, with an open governance structure and process. Members serve on several advisory councils, collaborate in a variety of working groups and special interest groups, gather at spring and fall member meetings, and are encouraged to participate in the strategic planning process.
Thursday, November 10, 2011
Database Management System (DBMS)
A database management system (DBMS) is a software package with computer programs that control the creation, maintenance, and use of a database. It allows database administrators (DBAs) and other specialists to conveniently develop databases for an organization's various applications. A database is an integrated collection of data records, files, and other database objects. A DBMS allows different user application programs to concurrently access the same database. DBMSs may use a variety of database models, such as the relational model or the object model, to conveniently describe and support applications. A DBMS typically supports query languages, which are in fact high-level programming languages: dedicated database languages that considerably simplify writing database application programs. Database languages also simplify organizing the database, as well as retrieving and presenting information from it. A DBMS provides facilities for controlling data access, enforcing data integrity, managing concurrency control, recovering the database after failures and restoring it from backup files, as well as maintaining database security.
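A small sketch of these facilities using Python's built-in sqlite3 module follows; the table and data are hypothetical, but they show a query language retrieving data and the DBMS, not the application, enforcing integrity rules.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE orders (
        id       INTEGER PRIMARY KEY,
        customer TEXT NOT NULL,           -- integrity: must be present
        amount   REAL CHECK (amount > 0)  -- integrity: no bad values
    )
""")
conn.executemany(
    "INSERT INTO orders (customer, amount) VALUES (?, ?)",
    [("Acme", 120.50), ("Acme", 80.00), ("Blythe", 45.25)],
)

# The query language retrieves and presents data without the program
# knowing how the records are physically stored.
for customer, total in conn.execute(
        "SELECT customer, SUM(amount) FROM orders GROUP BY customer"):
    print(customer, total)

try:
    conn.execute("INSERT INTO orders (customer, amount) VALUES (?, ?)",
                 ("Crane", -5.0))
except sqlite3.IntegrityError as exc:
    print("rejected by the DBMS:", exc)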
METCALFE'S LAW
Metcalfe's law states that the value of a telecommunications network is proportional to the square of the number of connected users of the system (n²). First formulated in this form by George Gilder in 1993, and attributed to Robert Metcalfe in regard to Ethernet, Metcalfe's law was originally presented, circa 1980, not in terms of users but of "compatible communicating devices" (for example, fax machines and telephones). Only with the rise of the internet did the law carry over to users and networks, as its original intent was to describe Ethernet purchases and connections. The law is also closely related to economics and business management, especially for competitive companies looking to merge with one another.
Metcalfe's law characterizes many of the network effects of communication technologies and networks such as the Internet, social networking, and the World Wide Web. Former Chairman of the U.S. Federal Communications Commission Reed Hundt said that this law gives the most insight into the workings of the internet. Metcalfe's law reflects the fact that the number of unique connections in a network of n nodes can be expressed mathematically as the triangular number n(n − 1)/2, which is asymptotically proportional to n². Websites such as Twitter, Facebook, and Myspace are the most prominent modern examples of Metcalfe's law. Forty-five percent of Americans in 2005 said the internet had played a huge role in a major decision in their life as a result of this social networking; the major decisions included buying a home, buying a car, seeking medical help, and finding a career. Interconnecting two networks is said to greatly exceed the power of the two separate, individual networks.
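The triangular-number count is easy to verify with a few lines of Python; the quick loop below shows how n(n − 1)/2 grows roughly as the square of n.

def unique_connections(n):
    # Each of n nodes can pair with (n - 1) others; divide by 2 so that
    # each connection is counted only once.
    return n * (n - 1) // 2

for n in (2, 10, 100, 1000):
    print(n, unique_connections(n))  # 1, 45, 4950, 499500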
The law has often been illustrated using the example of fax machines: a single fax machine is useless, but the value of every fax machine increases with the total number of fax machines in the network, because the total number of people with whom each user may exchange documents increases. Goods exhibit the first component of network effects, the intrinsic effect; services fall under the second component, known as the complementary effect. A social networking site works the same way as the fax machines described above: the greater the number of users of the service, the more valuable the service becomes to the community. Following Metcalfe's law, every new "friend" accepted or added on these social networking sites makes the user's profile ever more valuable in terms of the law. Both positive and negative outcomes accompany network effects involving a service of this sort: new jobs, relationships, and opportunities arise as more people come together; however, if used poorly, services of this type can lead to distant relationships.
With so much emphasis on creating a universal communication and networking unit, little thought has been given to signs of a reverse effect. As new members or consumers buy a good or service, others may leave the group to discover alternatives. With fewer users, the consumer is more of a priority to the company's success. On the other hand, with millions of people using a good or service, companies display less of a personal connection, because one person is not vital to the success of the whole unit. Reverse network effects promote individualism, allowing people not just to follow the system, but almost to create their own.
Limitations
Metcalfe's law is more of a heuristic or metaphor than an iron-clad empirical rule. In addition to the difficulty of quantifying the "value" of a network, the mathematical justification measures only the potential number of contacts, i.e., the technological side of a network. However, the social utility of a network depends upon the number of nodes actually in contact: a question of quality versus quantity. The law implicitly assumes, fallaciously, that every member of a network is of equal value to every other. This is not the case. For example, if Chinese and non-Chinese users do not understand each other, the utility of a network of users who speak the other language is near zero, and the law has to be calculated for the two compatibly communicating sub-networks separately. A barrier is created within the larger pool of users that is often never broken. Therefore, the actual growth in a network's value lies somewhere between a linear and a quadratic curve.
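A two-line calculation makes the sub-network point concrete: under a naive value-proportional-to-n² reading, two isolated groups of 50 users are worth only half as much as one mutually intelligible group of 100.

def metcalfe_value(n):
    return n * n  # value taken as proportional to n squared

print(metcalfe_value(50) + metcalfe_value(50))  # 5000: two isolated groups
print(metcalfe_value(100))                      # 10000: one connected group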
Business practicalities
If Metcalfe's law held exactly as described, all companies would theoretically combine with a partner, creating more users involved in the company on both the consumer and supplier sides. This is not the case, however. Much of the time, only companies of roughly equal standing are willing to interconnect with one another. When a larger network or business pairs with a smaller one, the larger party feels the smaller one benefits on a much greater scale: the larger business gains little in comparison, since it has already developed a reputation, whereas the small company is feeding off that previous success.
Modified models
Within the context of social networks, many, including Metcalfe himself, have proposed modified models using logarithmic and linear proportionality rather than squared proportionality. Reed and Odlyzko have also explored alternatives: Reed's law argues that group-forming networks grow exponentially (as 2ⁿ), while Odlyzko and his colleagues argue for the more modest n log n growth.
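For a sense of how differently these proposals scale, the sketch below evaluates the linear, n log n, quadratic, and exponential forms side by side; the sample sizes are arbitrary, and the formulas are the standard textbook statements of each model rather than anything this post derives.

import math

def linear(n):   return n                # linear proportionality
def odlyzko(n):  return n * math.log(n)  # the n log n refinement
def metcalfe(n): return n ** 2           # the original squared form
def reed(n):     return 2 ** n           # Reed's group-forming growth

for n in (4, 16, 64):
    print(n, linear(n), round(odlyzko(n), 1), metcalfe(n), reed(n))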
Sunday, November 6, 2011
The Genius of PC
Steven Paul Jobs (February 24, 1955 – October 5, 2011) was an American businessman and visionary widely recognized (along with his Apple business partner Steve Wozniak) as a charismatic pioneer of the personal computer revolution. He was co-founder, chairman, and chief executive officer of Apple Inc. Jobs also co-founded and previously served as chief executive of Pixar Animation Studios; he became a member of the board of directors of the Walt Disney Company in 2006, following the acquisition of Pixar by Disney.
In the late 1970s, Apple co-founder Steve Wozniak engineered one of the first commercially successful lines of personal computers, the Apple II series. Jobs directed its aesthetic design and marketing along with A.C. "Mike" Markkula, Jr. and others.
In the early 1980s, Jobs was among the first to see the commercial potential of Xerox PARC's mouse-driven graphical user interface, which led to the creation of the Apple Lisa (engineered by Ken Rothmuller and John Couch) and, one year later, of Apple employee Jef Raskin's Macintosh. After losing a power struggle with the board of directors in 1985, Jobs left Apple and founded NeXT, a computer platform development company specializing in the higher-education and business markets.
In 1986, he acquired the computer graphics division of Lucasfilm Ltd, which was spun off as Pixar Animation Studios. He was credited in Toy Story (1995) as an executive producer. He remained CEO and majority shareholder at 50.1 percent until its acquisition by The Walt Disney Company in 2006, making Jobs Disney's largest individual shareholder at seven percent and a member of Disney's board of directors. Apple's 1996 buyout of NeXT brought Jobs back to the company he co-founded, and he served as its interim CEO from 1997 and as permanent CEO from 2000 onwards, spearheading the advent of the iMac, iTunes, iPod, iPhone, and iPad. In buying NeXT, Apple also "acquire[d] the operating system that became Mac OS X." From 2003, Jobs fought an eight-year battle with cancer, and eventually resigned as CEO in August 2011, while on his third medical leave. He was then elected chairman of Apple's board of directors.
On October 5, 2011, around 3:00 p.m., Jobs died at his home in Palo Alto, California, aged 56, six weeks after resigning as CEO of Apple. A copy of his death certificate indicated respiratory arrest as the immediate cause of death, with "metastatic pancreas neuroendocrine tumor" as the underlying cause. His occupation was listed as "entrepreneur" in the "high tech" business.
STEVE JOBS' BIOGRAPHY
Steve Jobs
Born: Steven Paul Jobs, February 24, 1955, San Francisco, California, US
Died: October 5, 2011 (aged 56), Palo Alto, California, US
Nationality: American
Occupation: Co-founder, Chairman and CEO, Apple Inc.; CEO, Pixar; Co-founder and CEO, NeXT Inc.
Years active: 1974–2011
Net worth: $7.0 billion (September 2011)
Board member of: The Walt Disney Company, Apple Inc.
Spouse: Laurene Powell (1991–2011, his death)
Children: 4 (Lisa Brennan-Jobs, Reed, Erin, Eve)
Relatives: Mona Simpson (sister)
Decision Support System
A Decision Support System (DSS) is a collection of integrated software applications and hardware that form the backbone of an organization's decision-making process. Companies across all industries rely on decision support tools, techniques, and models to help them assess and resolve everyday business questions. The decision support system is data-driven, as the entire process feeds off the collection and availability of data to analyze. Business Intelligence (BI) reporting tools, processes, and methodologies are key components of any decision support system and provide end users with rich reporting, monitoring, and data analysis.
High-level Decision Support System Requirements:
Data collection from multiple sources (sales data, inventory data, supplier data, market research data, etc.)
Data formatting and collation
A suitable database location and format built for decision-support-based reporting and analysis
Robust tools and applications to report, monitor, and analyze the data
Decision support systems have become critical and ubiquitous across all types of business. In today’s global marketplace, it is imperative that companies respond quickly to market changes. Companies with comprehensive decision support systems have a significant competitive advantage.
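As a toy end-to-end sketch of the requirements listed above, the Python example below collects two hypothetical data sources, collates them, and produces a simple report a decision maker could act on; the pandas library and all the figures are assumptions, since the text prescribes no particular tooling.

import pandas as pd

sales = pd.DataFrame({              # source 1: sales data
    "sku":     ["A1", "A1", "B2", "B2"],
    "units":   [40, 35, 10, 12],
    "revenue": [400.0, 350.0, 250.0, 300.0],
})
inventory = pd.DataFrame({          # source 2: inventory data
    "sku":     ["A1", "B2"],
    "on_hand": [60, 200],
})

# Format and collate: aggregate sales per SKU, then join with inventory.
report = (sales.groupby("sku", as_index=False).sum()
               .merge(inventory, on="sku"))
report["weeks_of_stock"] = report["on_hand"] / report["units"]

print(report)  # a high weeks_of_stock value flags likely over-ordering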
Decision Support Systems delivered by MicroStrategy Business Intelligence
MicroStrategy provides companies with a unified reporting, analytical, and monitoring platform that forms the core of any Decision Support System. The software exemplifies all of the important characteristics of an ideal Decision Support System:
Supports individual and group decision making: MicroStrategy provides a single platform that allows all users to access the same information and access the same version of truth, while providing autonomy to individual users and development groups to design reporting content locally.
Easy to Develop and Deploy: MicroStrategy delivers an interactive, scalable platform for rapidly developing and deploying projects. Multiple projects can be created within a single shared metadata. Within each project, development teams create a wide variety of re-usable metadata objects. As decision support system deployment expands within an organization, the MicroStrategy platform effortlessly supports an increasing concurrent user base.
Comprehensive Data Access: MicroStrategy software allows users to access data from different sources concurrently, leaving organizations the freedom to choose the data warehouse that best suits their unique requirements and preferences.
Integrated software: MicroStrategy's integrated platform enables administrators and IT professionals to develop data models, perform sophisticated analysis, generate analytical reports, and deliver these reports to end users via different channels (Web, email, file, print and mobile devices). This eliminates the need for companies to spend countless hours purchasing and integrating disparate software products in an attempt to deliver a consistent user experience.
Flexibility: MicroStrategy SDK (Software Development Kit) exposes its vast functionality through an extensive library of APIs. MicroStrategy customers can choose to leverage the power of the software’s flexible APIs to design and deploy solutions tailored to their unique business needs.
Wednesday, November 2, 2011
The advantages of IT..
The advantages of IT are many. True globalization has come about only via this automated system. The creation of one interdependent system helps us to share information and end linguistic barriers across the continents. The collapse of geographic boundaries has made the world a 'global village'. The technology has made communication not only cheaper but also much quicker, and available 24x7. The wonders of text messages, email and auto-response, backed by computer security applications, have opened up scope for direct communication.
Computerized internet business processes have made many businesses turn to the Internet for increased productivity, greater profitability, clutter-free working conditions and a global clientèle. It is mainly due to the IT industry that people from diverse cultures are able to personally communicate and exchange valuable ideas. This has greatly reduced prejudice and increased sensitivity. Businesses are able to operate 24x7, even from remote locations.
Information technology has rippled on in the form of a communication revolution. Specialists in this field, such as programmers, analysts and developers, are able to extend the applications and improve business processes simultaneously. The management infrastructure thus generated defies all boundaries. Among the many advantages of the industry are post-implementation technical support, network and individual desktop management, dedicated business applications, and strategic planning for enhanced profitability and effective project management.
IT provides a number of low-cost business options to tap higher productivity, with dedicated small-business CRM and a special category for larger operations. Regular upgrades have enabled many businesses to increase productivity and identify market niches that would never have been possible without the connectivity. With every subsequent increase in ROI, or return on investment, businesses are able to remain buoyant even amidst economic recession. Not only do people connect faster with the help of information technology, but they are also able to identify like-minded individuals and extend help, while strengthening ties.
This segment revolves around automated processes that require little or no human intervention. This in turn has minimized job-stress levels at the workplace and eliminated the repetition of tasks, losses due to human error, risks arising from neglect of timely upgrades, and paper-intensive business applications that result in the accumulation of unnecessary bulk. The sophistication of modern workstations and general working conditions is possible only due to the development of information technology.
By Gaynor Borade