Monday, September 16, 2013

GOOGLE EARTH

My home: 66/6, Bangtatan Subdistrict, Songpenong District, Suphanburi Province

History of the Internet

The history of the Internet began with the development of electronic computers in the 1950s. The public was first introduced to the concepts that would lead to the Internet when a message was sent over the ARPANet from computer science Professor Leonard Kleinrock's laboratory at University of California, Los Angeles (UCLA), after the second piece of network equipment was installed at Stanford Research Institute (SRI). Packet switched networks such as ARPANET, Mark I at NPL in the UK, CYCLADES, Merit Network, Tymnet, and Telenet, were developed in the late 1960s and early 1970s using a variety of protocols. The ARPANET in particular led to the development of protocols for internetworking, in which multiple separate networks could be joined together into a network of networks.
In 1982, the Internet protocol suite (TCP/IP) was standardized, and consequently, the concept of a world-wide network of interconnected TCP/IP networks, called the Internet, was introduced. Access to the ARPANET was expanded in 1981 when the National Science Foundation (NSF) developed the Computer Science Network (CSNET) and again in 1986 when NSFNET provided access to supercomputer sites in the United States from research and education organizations. Commercial Internet service providers (ISPs) began to emerge in the late 1980s and early 1990s. The ARPANET was decommissioned in 1990. The Internet was commercialized in 1995 when NSFNET was decommissioned, removing the last restrictions on the use of the Internet to carry commercial traffic.
Since the mid-1990s, the Internet has had a revolutionary impact on culture and commerce, including the rise of near-instant communication by electronic mail, instant messaging, Voice over Internet Protocol (VoIP) "phone calls", two-way interactive video calls, and the World Wide Web with its discussion forums, blogs, social networking, and online shopping sites. The research and education community continues to develop and use advanced networks such as NSF's very high speed Backbone Network Service (vBNS), Internet2, and National LambdaRail. Increasing amounts of data are transmitted at higher and higher speeds over fiber optic networks operating at 1 Gbit/s, 10 Gbit/s, or more. The Internet's takeover of the global communication landscape was almost instant in historical terms: it carried only 1% of the information flowing through two-way telecommunications networks in 1993, already 51% by 2000, and more than 97% of the telecommunicated information by 2007.[1] Today the Internet continues to grow, driven by ever greater amounts of online information, commerce, entertainment, and social networking.

Precursors

The Internet has precursors, such as the telegraph system, that date back to the 19th century, more than a century before the digital Internet became widely used in the second half of the 1990s; the telegraph system was, in fact, the first fully digital communication system. The concept of data communication – transmitting data between two different places, connected via some kind of electromagnetic medium, such as radio or an electrical wire – predates the introduction of the first computers. Such communication systems were typically limited to point-to-point communication between two end devices. Telegraph systems and telex machines can be considered early precursors of this kind of communication.
Fundamental theoretical work in data transmission and information theory was developed by Claude Shannon, Harry Nyquist, and Ralph Hartley during the early 20th century.
Early computers used the technology available at the time to allow communication between the central processing unit and remote terminals. As the technology evolved, new systems were devised to allow communication over longer distances (for terminals) or with higher speed (for interconnection of local devices) that were necessary for the mainframe computer model. Using these technologies made it possible to exchange data (such as files) between remote computers. However, the point to point communication model was limited, as it did not allow for direct communication between any two arbitrary systems; a physical link was necessary. The technology was also deemed as inherently unsafe for strategic and military use, because there were no alternative paths for the communication in case of an enemy attack.

Three terminals and an ARPA

A fundamental pioneer in the call for a global network, J. C. R. Licklider, articulated the ideas in his January 1960 paper, Man-Computer Symbiosis.
"A network of such [computers], connected to one another by wide-band communication lines [which provided] the functions of present-day libraries together with anticipated advances in information storage and retrieval and [other] symbiotic functions."
—J.C.R. Licklider, [2]
In August 1962, Licklider and Welden Clark published the paper "On-Line Man Computer Communication", which was one of the first descriptions of a networked future.
In October 1962, Licklider was hired by Jack Ruina as Director of the newly established Information Processing Techniques Office (IPTO) within DARPA, with a mandate to interconnect the United States Department of Defense's main computers at Cheyenne Mountain, the Pentagon, and SAC HQ. There he formed an informal group within DARPA to further computer research. He began by writing memos describing a distributed network to the IPTO staff, whom he called "Members and Affiliates of the Intergalactic Computer Network". As part of the information processing office's role, three network terminals had been installed: one for System Development Corporation in Santa Monica, one for Project Genie at the University of California, Berkeley and one for the Compatible Time-Sharing System project at the Massachusetts Institute of Technology (MIT). Licklider's identified need for inter-networking would be made obvious by the apparent waste of resources this caused.
"For each of these three terminals, I had three different sets of user commands. So if I was talking online with someone at S.D.C. and I wanted to talk to someone I knew at Berkeley or M.I.T. about this, I had to get up from the S.D.C. terminal, go over and log into the other terminal and get in touch with them. [...] I said, it's obvious what to do (But I don't want to do it): If you have these three terminals, there ought to be one terminal that goes anywhere you want to go where you have interactive computing. That idea is the ARPAnet."
Robert W. Taylor, co-writer with Licklider of "The Computer as a Communications Device", in an interview with The New York Times[3]
Although he left the IPTO in 1964, five years before the ARPANET went live, it was his vision of universal networking that provided the impetus that led his successors such as Lawrence Roberts and Robert Taylor to further the ARPANET development. Licklider later returned to lead the IPTO in 1973 for two years.

Packet switching

At the heart of the problem lay the issue of connecting separate physical networks to form one logical network. During the 1960s, Paul Baran (RAND Corporation) produced a study of survivable networks for the US military. Information transmitted across Baran's network would be divided into what he called 'message-blocks'. Independently, Donald Davies (National Physical Laboratory, UK) proposed and developed a similar network based on what he called packet-switching, the term that would ultimately be adopted. Leonard Kleinrock (MIT) developed a mathematical theory behind this technology. Packet-switching provides better bandwidth utilization and response times than the traditional circuit-switching technology used for telephony, particularly on resource-limited interconnection links.[6]
Packet switching is a rapid store-and-forward networking design that divides messages up into arbitrary packets, with routing decisions made per packet. Early networks used message-switched systems that required rigid routing structures prone to single points of failure. This led Tommy Krash and Paul Baran's U.S. military funded research to focus on using message-blocks to include network redundancy.[7] The widespread urban legend that the Internet was designed to resist a nuclear attack likely arose as a result of Baran's earlier work on packet switching, which did focus on redundancy in the face of a nuclear "holocaust".
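To make the mechanism concrete, here is a minimal, purely illustrative sketch in Python (not a reconstruction of any historical system): a message is divided into small packets, and a store-and-forward node looks up a next hop for each packet independently. The packet size and the one-entry routing table are made-up assumptions for the example.

```python
# Purely illustrative sketch of packet switching (not any historical system).
# A message is divided into small packets; each packet is then stored and
# forwarded independently, with the routing decision made per packet.
# PACKET_SIZE and the one-entry routing table are made-up assumptions.

PACKET_SIZE = 8  # bytes per packet, arbitrary for this example

def packetize(message: bytes, dest: str):
    """Divide a message into numbered packets addressed to dest."""
    return [
        {"dest": dest, "seq": i // PACKET_SIZE, "payload": message[i:i + PACKET_SIZE]}
        for i in range(0, len(message), PACKET_SIZE)
    ]

def forward(packet, routing_table):
    """A store-and-forward node: look up the next hop for this packet."""
    return routing_table[packet["dest"]], packet

routing_table = {"host-b": "node-2"}  # hypothetical single-entry table
for pkt in packetize(b"hello, packet switching", "host-b"):
    next_hop, p = forward(pkt, routing_table)
    print(f"packet {p['seq']} -> {next_hop}: {p['payload']!r}")
```

Because every packet carries its own destination address, an intermediate node can make a fresh routing decision for each one, which is what allows a packet-switched network to route around failures rather than depending on a single fixed path.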
By December 5, 1969, the initial two-node ARPANET had grown into a four-node network with the addition of the University of Utah and the University of California, Santa Barbara. Building on ideas developed in ALOHAnet, the ARPANET grew rapidly. By 1981, the number of hosts had grown to 213, with a new host being added approximately every twenty days.[11][12]
ARPANET development was centered around the Request for Comments (RFC) process, still used today for proposing and distributing Internet protocols and systems. RFC 1, entitled "Host Software", was written by Steve Crocker from the University of California, Los Angeles, and published on April 7, 1969. These early years were documented in the 1972 film Computer Networks: The Heralds of Resource Sharing.
ARPANET became the technical core of what would become the Internet, and a primary tool in developing the technologies used. The early ARPANET used the Network Control Program (NCP, sometimes Network Control Protocol) rather than TCP/IP. On January 1, 1983, known as flag day, NCP on the ARPANET was replaced by the more flexible and powerful family of TCP/IP protocols, marking the start of the modern Internet.[13]
International collaborations on ARPANET were sparse. For various political reasons, European developers were concerned with developing the X.25 networks. Notable exceptions were the Norwegian Seismic Array (NORSAR) in 1972, followed in 1973 by Sweden with satellite links to the Tanum Earth Station and Peter Kirstein's research group in the UK, initially at the Institute of Computer Science, London University and later at University College London.[14]

NPL

In 1965, Donald Davies of the National Physical Laboratory (United Kingdom) proposed a national data network based on packet-switching. The proposal was not taken up nationally, but by 1970 he had designed and built the Mark I packet-switched network to meet the needs of the multidisciplinary laboratory and prove the technology under operational conditions.[15] By 1976, 12 computers and 75 terminal devices were attached, and more were added until the network was replaced in 1986.

Merit Network

The Merit Network[16] was formed in 1966 as the Michigan Educational Research Information Triad to explore computer networking between three of Michigan's public universities as a means to help the state's educational and economic development.[17] With initial support from the State of Michigan and the National Science Foundation (NSF), the packet-switched network was first demonstrated in December 1971 when an interactive host-to-host connection was made between the IBM mainframe computer systems at the University of Michigan in Ann Arbor and Wayne State University in Detroit.[18] In October 1972, connections to the CDC mainframe at Michigan State University in East Lansing completed the triad. Over the next several years, in addition to host-to-host interactive connections, the network was enhanced to support terminal-to-host connections, host-to-host batch connections (remote job submission, remote printing, batch file transfer), interactive file transfer, gateways to the Tymnet and Telenet public data networks, X.25 host attachments, gateways to X.25 data networks, Ethernet-attached hosts, and eventually TCP/IP; additional public universities in Michigan also joined the network.[18][19] All of this set the stage for Merit's role in the NSFNET project starting in the mid-1980s.

CYCLADES

The CYCLADES packet switching network was a French research network designed and directed by Louis Pouzin. First demonstrated in 1973, it was developed to explore alternatives to the initial ARPANET design and to support network research generally. It was the first network to make the hosts responsible for the reliable delivery of data, rather than the network itself, using unreliable datagrams and associated end-to-end protocol mechanisms.
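As a rough sketch of that end-to-end approach, the Python example below assumes an unreliable channel that silently drops datagrams and lets the sending host, not the network, retransmit until an acknowledgement arrives. It is a generic stop-and-wait illustration, not the CYCLADES protocol; the drop probability and retry limit are invented for the example.

```python
import random

# Generic stop-and-wait sketch of end-to-end reliability over an unreliable
# channel, in the spirit of the host-based approach described above. It is
# NOT the CYCLADES protocol; the drop probability and retry limit are made up.

def unreliable_send(datagram, drop_prob=0.3):
    """Pretend datagram channel: silently drops some messages."""
    return None if random.random() < drop_prob else datagram

def send_reliably(payload, max_retries=10):
    """The sending host retransmits until it receives an acknowledgement."""
    for attempt in range(1, max_retries + 1):
        delivered = unreliable_send({"seq": 0, "data": payload})
        if delivered is not None:              # receiver got the datagram
            ack = unreliable_send({"ack": 0})  # the ACK itself may be lost
            if ack is not None:
                return attempt                 # success after this many tries
    raise TimeoutError("no acknowledgement after max_retries attempts")

print("delivered after", send_reliably(b"end-to-end"), "attempt(s)")
```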

Google Chrome

Google Chrome is a freeware web browser[7] developed by Google. It used the WebKit layout engine until version 27 and, with the exception of its iOS releases, from version 28 and beyond uses the WebKit fork Blink.[8][9][10] It was first released as a beta version for Microsoft Windows on September 2, 2008, and as a stable public release on December 11, 2008.
Net Applications has indicated that Chrome is the third-most popular web browser when it comes to the size of its user base, behind Internet Explorer and Firefox.[11] StatCounter, however, estimates that Google Chrome has a 39% worldwide usage share of web browsers, making it the most widely used web browser in the world.[12]
In September 2008, Google released the majority of Chrome's source code as an open source project called Chromium,[13][14] on which Chrome releases are still based. Notable components that are not open source are the built-in PDF viewer and the built-in Flash player.

History

Google's Eric Schmidt opposed the development of an independent web browser for six years. He stated that "at the time, Google was a small company," and he did not want to go through "bruising browser wars." After co-founders Sergey Brin and Larry Page hired several Mozilla Firefox developers and built a demonstration of Chrome, however, Schmidt admitted that "It was so good that it essentially forced me to change my mind."

Announcement

The release announcement was originally scheduled for September 3, 2008, and a comic by Scott McCloud was to be sent to journalists and bloggers explaining the features within the new browser.[16] Copies intended for Europe were shipped early, and German blogger Philipp Lenssen of Google Blogoscoped[17] made a scanned copy of the 38-page comic available on his website after receiving it on September 1, 2008.[18] Google subsequently made the comic available on Google Books[19] and mentioned it on their official blog along with an explanation for the early release.

Public release

The browser was first publicly released for Microsoft Windows (XP and later versions) on September 2, 2008, in 43 languages, officially as a beta version.[21]
On the same day, a CNET news item[22] drew attention to a passage in the Terms of Service statement for the initial beta release, which seemed to grant Google a license to all content transferred via the Chrome browser. This passage was inherited from the general Google terms of service.[23] Google responded to this criticism immediately by stating that the language used was borrowed from other products, and removed the passage from the Terms of Service.[7]
Chrome quickly gained about 1% usage share.[20][24][25][26] After the initial surge, usage share dropped until it hit a low of 0.69% in October 2008. It then started rising again and by December 2008, Chrome again passed the 1% threshold.[27]
In early January 2009, CNET reported that Google planned to release versions of Chrome for OS X and Linux in the first half of the year.[28] The first official Chrome OS X and Linux developer previews[29] were announced on June 4, 2009 with a blog post[30] saying they were missing many features and were intended for early feedback rather than general use.
In December 2009, Google released beta versions of Chrome for OS X and Linux.[31][32] Google Chrome 5.0, announced on May 25, 2010, was the first stable release to support all three platforms.[33]
Chrome was one of the twelve browsers offered to European Economic Area users of Microsoft Windows in 2010.

Development

Chrome was assembled from 25 different code libraries from Google and third parties such as Mozilla's Netscape Portable Runtime, Network Security Services, NPAPI, Skia Graphics Engine, SQLite, and a number of other open-source projects.[35] The V8 JavaScript virtual machine was considered a sufficiently important project to be split off (as was Adobe/Mozilla's Tamarin) and handled by a separate team in Denmark coordinated by Lars Bak at Aarhus. According to Google, existing implementations were designed "for small programs, where the performance and interactivity of the system weren't that important", but web applications such as Gmail "are using the web browser to the fullest when it comes to DOM manipulations and JavaScript", and therefore would significantly benefit from a JavaScript engine that could work faster.
Chrome uses the Blink rendering engine to display web pages. Based on WebKit 2, Blink only uses WebKit's "WebCore" components while substituting all other components, such as its own multi-process architecture in place of WebKit's native implementation.[36]
Chrome is internally tested with unit testing, "automated user interface testing of scripted user actions", fuzz testing, as well as WebKit's layout tests (99% of which Chrome is claimed to have passed), and against commonly accessed websites inside the Google index within 20–30 minutes.[19]
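For readers unfamiliar with the term, fuzz testing simply feeds a program large volumes of random or malformed input and records what makes it fail. The sketch below is a generic Python illustration against a hypothetical stand-in parser; it is not Chrome's actual test harness.

```python
import random

# Minimal, generic fuzz-testing sketch: feed random byte strings to a target
# function and record inputs that make it raise. The target below is a
# hypothetical stand-in, not Chrome or any real parser.

def target_parser(data: bytes) -> int:
    """Stand-in function under test; contains a deliberate 'bug' to find."""
    if not data:
        raise ValueError("empty input not handled")
    return sum(data) % 256

def fuzz(rounds: int = 1000, max_len: int = 64):
    """Generate random inputs and collect the ones that crash the target."""
    failures = []
    for _ in range(rounds):
        blob = bytes(random.randrange(256) for _ in range(random.randrange(max_len)))
        try:
            target_parser(blob)
        except Exception:
            failures.append(blob)
    return failures

print(f"{len(fuzz())} crashing inputs found")
```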
Google created Gears for Chrome, which added features for web developers typically relating to the building of web applications, including offline support.[19] However, Google phased out Gears in favor of HTML5.[37]
On January 11, 2011, the Chrome product manager, Mike Jazayeri, announced that Chrome would remove H.264 video codec support from its HTML5 player, citing the desire to bring Google Chrome more in line with the open codecs available in the Chromium project, on which Chrome is based.[38] Despite this, on November 6, 2012, Google released a version of Chrome on Windows which added hardware-accelerated H.264 video decoding.[39] As of January 2013, there has been no further announcement about the future of Chrome's H.264 support.
On February 7, 2012, Google launched Google Chrome Beta for Android 4.0 devices.[40] On many new devices with Android 4.1 and later preinstalled, Chrome is the default browser.[41]
On April 3, 2013, Google announced that it would fork the WebCore component of WebKit to form its own layout engine, known as Blink. The aim of Blink is to give Chrome's developers more freedom to implement their own changes to the engine, and to allow the codebase to be trimmed of code that Chrome does not use or implement.

Enterprise deployment

In December 2010, Google announced that, to make it easier for businesses to use Chrome, it would provide an official Chrome MSI package. For business use it is helpful to have full-fledged MSI packages that can be customized via transform files (.mst), but the MSI provided with Chrome is only a very limited MSI wrapper fitted around the normal installer, and many businesses find that this arrangement does not meet their needs.[42] The normal downloaded Chrome installer puts the browser in the user's local app data directory and provides invisible background updates, but the MSI package allows installation at the system level, giving system administrators control over the update process[43] — this was formerly possible only when Chrome was installed using Google Pack. Google also created group policy objects to fine-tune the behavior of Chrome in the business environment, for example setting the automatic update interval, disabling auto-updates, and setting a home page, and to work around basic Windows design flaws and bugs relating to roaming profile support.[44] Until version 24, the software was known not to be ready for enterprise deployments with roaming profiles or Terminal Server/Citrix environments.

Chromium

In September 2008, Google released a large portion of Chrome's source code as an open source project called Chromium. This move enabled third-party developers to study the underlying source code and to help port the browser to the OS X and Linux operating systems. The Google-authored portion of Chromium is released under the permissive BSD license.[46] Other portions of the source code are subject to a variety of open source licenses.[47] Chromium is similar to Chrome, but lacks built-in automatic updates, built-in PDF reader and built-in Flash player, as well as Google branding and has a blue-colored logo instead of the multicolored Google logo.[48][49] Chromium does not implement user RLZ tracking.

Blogger

Blogger logo



Blogger is a blog-publishing service that allows private or multi-user blogs with time-stamped entries. It was developed by Pyra Labs, which was bought by Google in 2003. Generally, the blogs are hosted by Google at a subdomain of blogspot.com. Up until May 1, 2010, Blogger allowed users to publish blogs on other hosts via FTP. All such blogs had (or still have) to be moved to Google's own servers, with domains other than blogspot.com allowed via custom URLs.[3] Unlike WordPress, Blogger cannot be installed on a web server; one has to use DNS facilities to point a custom URL at the blogspot domain.[4]
History

On August 23, 1999, Blogger was launched by Pyra Labs. As one of the earliest dedicated blog-publishing tools, it is credited for helping popularize the format. In February 2003, Pyra Labs was acquired by Google under undisclosed terms. The acquisition allowed premium features (for which Pyra had charged) to become free. In October 2004, Pyra Labs' co-founder, Evan Williams, left Google. In 2004, Google purchased Picasa; it integrated Picasa and its photo sharing utility Hello into Blogger, allowing users to post photos to their blogs.
On May 9, 2004, Blogger introduced a major redesign, adding features such as web standards-compliant templates, individual archive pages for posts, comments, and posting by email. On August 14, 2006, Blogger launched its latest version in beta, codenamed "Invader", alongside the gold release. This migrated users to Google servers and had some new features, including interface language in French, Italian, German and Spanish.[5] In December 2006, this new version of Blogger was taken out of beta. By May 2007, Blogger had completely moved over to Google-operated servers. Blogger was ranked 16th on the list of top 50 domains in terms of number of unique visitors in 2007.
Redesign
As part of the Blogger redesign in 2006, all blogs associated with a user's Google Account were migrated to Google servers. Blogger claims that the service is now more reliable because of the quality of the servers.[7]
Along with the migration to Google servers, several new features were introduced, including label organization, a drag-and-drop template editing interface, reading permissions (to create private blogs) and new Web feed options. Furthermore, blogs are updated dynamically, as opposed to rewriting HTML files.
In a version of the service called Blogger in Draft,[8] new features are tested before being released to all users. New features are discussed in the service's official blog.[9]


Screenshot of blog post compose window of Blogger, April 2012
In September 2009, Google introduced new features into Blogger as part of its tenth anniversary celebration. The features included a new interface for post editing, improved image handling, Raw HTML Conversion, and other Google Docs-based implementations, including:
    • Adding location to posts via geotagging.
    • Post time-stamping at publication, not at original creation.
    • Vertical re-sizing of the post editor. The size is saved in a per-user, per-blog preference.
    • Link editing in Compose mode.
    • Full Safari 3 support and fidelity on both Windows and Mac OS.
    • New Preview dialog that shows posts in a width and font size approximating what is seen in the published view.
    • Placeholder image for tags so that embeds are movable in Compose mode.
    • New toolbar with Google aesthetics, faster loading time, and "undo" and "redo" buttons. Also added was the full justification button, a strike-through button, and an expanded color palette.
In 2010, Blogger introduced new templates and redesigned its website. The new post editor was criticized for being less reliable than its predecessor.
Available languages
Blogger is available in these languages: Arabic, Bengali, Bulgarian, Catalan, Chinese (Simplified), Chinese (Traditional), Croatian, Czech, Danish, Dutch, English, Filipino, Finnish, French, German, Greek, Gujarati, Hebrew, Hindi, Hungarian, Indonesian, Italian, Japanese, Kannada, Korean, Latvian, Lithuanian, Malay, Malayalam, Marathi, Nepali, Norwegian, Oriya, Persian, Polish, Portuguese (Brazil), Portuguese (Portugal), Romanian, Russian, Serbian, Slovak, Slovenian, Spanish, Swedish, Tamil, Telugu, Thai, Turkish, Ukrainian, and Vietnamese.
Country-specific Blogger addresses
Starting in February 2013, Blogger began integrating user blogs with multiple country-specific URL addresses: for example, exampleuserblogname.blogspot.com would be automatically redirected to exampleuserblogname.blogspot.ca in Canada, to exampleuserblogname.blogspot.co.uk in the United Kingdom, and so on. Blogger explained that by doing this it could manage blog content more locally: if any objectionable material violated a particular country's laws, it could remove and block access to that blog for that country through the assigned ccTLD, while retaining access through other ccTLD addresses and the default blogspot.com URL. However, if a blog using a country-specific URL is removed, it is still technically possible to access the blog through Google's No Country Redirect override, by entering the regular blogspot.com address and adding /ncr after the .com.
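As a small illustration of that override, the short Python snippet below simply builds the NCR form of a Blogger address; the blog name is the made-up example used above, not a real blog.

```python
# Illustrative helper for the "No Country Redirect" form of a Blogger address.
# The blog name below is a made-up example, not a real blog.

def ncr_url(blog_name: str) -> str:
    """Return the .com address with /ncr appended, which asks Blogger not to
    redirect the visitor to a country-specific ccTLD such as .ca or .co.uk."""
    return f"http://{blog_name}.blogspot.com/ncr"

print(ncr_url("exampleuserblogname"))
# http://exampleuserblogname.blogspot.com/ncr
```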

Integration


    • The Google Toolbar has a feature called "BlogThis!" which allows toolbar users with Blogger accounts to post links directly to their blogs.
    • "Blogger for Word" is an add-in for Microsoft Word which allows users to save a Microsoft Word Document directly to a Blogger blog, as well as edit their posts both on- and offline. As of January 2007, Google says "Blogger for Word is not currently compatible with the new version of Blogger", and they state no decision has been made about supporting it with the new Blogger.[14] However, Microsoft Office 2007 adds native support for a variety of blogging systems, including Blogger.
    • Blogger supports Google's AdSense service as a way of generating revenue from running a blog.
    • Blogger also started integration with Amazon Associates in December 2009, as a service to generate revenue.[15] It was not publicly announced, but by September 2011 it appeared that all integration options had been removed and that the partnership had ended.[16]
    • Blogger offers multiple author support, making it possible to establish group blogs.
    • Blogger offers a template editing feature, which allows users to customize the Blogger template.
    • Windows Live Writer, a standalone app of the Windows Live suite, publishes directly to Blogger.
    • Blogger offers private blogs, which can only be accessed by the blog's author and invited readers, authors, or admins.
    • Blogger can be optionally integrated with Google+.
    • Google+ comments can be integrated with Blogger comments.