METABLOG EBOOKS FROM GOOGLEBOOKS

FIND E-BOOKS HERE!
Showing posts with label search engine. Show all posts

Tuesday

Indexing system


Hypertext, made famous by the World Wide Web, is most simply a way of constructing documents that reference other documents. Within a hypertext document, a block of text can be tagged as a hypertext link pointing to another document. When viewed with a hypertext browser, the link can be activated to view the other document. Of course, if you're reading this document, you're already familiar with the concept.

Hypertext's original idea was to take advantage of electronic data processing to organize large quantities of information that would otherwise overwhelm a reader. Two hundred years ago, the printing press made possible a similar innovation - the encyclopedia. Hypertext's older cousin combined topical articles with an indexing system to afford the researcher one or perhaps two orders of magnitude increase in the volume of accessible information. Early experience with hypertext suggests that it may ultimately yield an additional order of magnitude increase, by making directly accessible information that would otherwise be relegated to a bibliography. Hypertext's limiting factor appears not to be the physical size of some books, but rather the ability of the reader to navigate increasingly complex search structures. Currently, additional increases in human information processing ability seem tied to developing more sophisticated automated search tools, though the present technology presents possibilities that remain far from fully explored.

Augmenting basic hypertext with graphics, more complex user input fields and dynamically generated documents adds considerable power and flexibility to this concept. Hypertext, though still useful for its original goal of organizing large quantities of information, becomes a simple, general-purpose user interface that fits neatly into the increasingly popular client-server model. It does not seem difficult to imagine a day when restaurant orders, for example, will be taken using a hand-held hypertext terminal, relayed directly to the kitchen for preparation, and simultaneously logged to a database for later analysis by management.


Characteristics of good hypertext


The flexibility of hypertext gives free rein to the author's creativity, but good hypertext appears to have some common characteristics:


Lots of documents. Much of hypertext's power comes from its ability to make large quantities of information accessible. If all the text in your system can be printed on ten pages, it would be just as simple to read through it from beginning to end and forget all this hypertext silliness. On the other hand, if there are ten million pages of text in your system, then someone could follow a link on atomic energy and ultimately hope to find a description of the U-238 decay process.

Lots of links. If each document has just one link, then it is little more than normal, sequential text. A hypertext document should present the reader with several links, offering a choice about where to go next. Ideally, a document should present as many relevant links as the reader can easily comprehend and select among.

Range of detail. The great advantage of hypertext is that it permits readers to explore to a breadth and depth that is simply not feasible in print. To make this accessible, available hypertext documents should range from the broadest possible overview of a subject, down to its gritty details. Imagine the Encyclopedia Britannica, all thirty-odd volumes of it, searchable online and with each article possessing links to a half dozen reference documents with even more detailed subject coverage. This is the potential of hypertext.

Correct links. This may seem trivial, but it's amazing how many Web links point nowhere. In general, be careful linking to any hypertext document not under your direct control. Can you count on it to be there later?
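
Link rot is easy to check for mechanically. As a minimal sketch (my own illustration, not any standard tool), here is a small Python script using only the standard library that takes a list of URLs (the example addresses are hypothetical) and reports any that no longer respond:

import urllib.request
import urllib.error

def check_links(urls, timeout=10):
    """Report the HTTP status of each URL, or mark it as broken."""
    results = {}
    for url in urls:
        request = urllib.request.Request(url, method="HEAD")  # HEAD skips the document body
        try:
            with urllib.request.urlopen(request, timeout=timeout) as response:
                results[url] = response.status
        except (urllib.error.HTTPError, urllib.error.URLError) as err:
            results[url] = "broken: " + str(err)
    return results

# Hypothetical links harvested from a hypertext document.
for url, status in check_links(["http://example.com/", "http://example.com/missing"]).items():
    print(url, "->", status)

Running a check like this periodically over your own pages catches links that have quietly died.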

Guidelines for a hypertext reference system


Hypertext seems best suited for reference material, so here are my suggested guidelines for creating hypertext reference systems, with the Internet Encyclopedia offered as an example:


Reference documents. Start by assembling a good set of core reference material. In the Encyclopedia's case, the RFCs that document standard Internet protocols form this core. Ideally, the reference core should consist of extremely detailed documents, offering the highest possible degree of completeness. A general reference work on physics might start with a large collection of scientific papers.

Topical articles. Augment the core reference material with articles at a broader level of detail. Systematic organization of these articles, perhaps using an outline as a framework, is essential to making them accessible to the reader. The articles should be focused, and short enough to be easily digestible in one piece.

Search engine. A good search engine is invaluable for any large collection of documents. The Internet Encyclopedia uses a search model based on searching outward from a particular page, in order to facilitate both topical and keyword searches; a rough sketch of this outward search appears after this list.

Extras. These can include graphics, audio and video clips, problems and exercises, student courses, simulations, sample programs, ordering forms, database tables, and revision histories, to name a few.
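
To make the outward-search idea mentioned in the search-engine guideline concrete, here is a rough Python sketch. It assumes only a toy in-memory collection (a dict of page texts and a dict of outgoing links) and illustrates the general technique, not the Internet Encyclopedia's actual engine:

from collections import deque

def search_outward(pages, links, start, keywords, max_depth=2):
    """Breadth-first search outward from `start`, scoring pages by keyword hits.

    pages: dict mapping page name -> page text
    links: dict mapping page name -> list of linked page names
    """
    seen = {start}
    queue = deque([(start, 0)])
    hits = []
    while queue:
        page, depth = queue.popleft()
        text = pages.get(page, "").lower()
        score = sum(text.count(word.lower()) for word in keywords)
        if score:
            hits.append((score, depth, page))
        if depth < max_depth:
            for neighbour in links.get(page, []):
                if neighbour not in seen:
                    seen.add(neighbour)
                    queue.append((neighbour, depth + 1))
    # Keyword-rich pages first, then the pages closest to the starting point.
    return sorted(hits, key=lambda hit: (-hit[0], hit[1]))

Because the crawl starts at the reader's current page, a query naturally favours topically nearby material while still allowing plain keyword matches further out.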

Sunday

Where You can find the best Ebooks on the Web



1. Free French Ebooks

2. Free English Ebooks

3. French Ebooks

4. English Ebooks

5. Main Ebooks Search Engine




6. Publishers




Thanks to ebouquin for the list (which I have completed).

Wednesday

Understanding your questions


The biggest internet revolution for a generation will be unveiled this month with the launch of software that will understand questions and give specific, tailored answers in a way that the web has never managed before.

The new system, Wolfram Alpha, showcased at Harvard University in the US last week, takes the first step towards what many consider to be the internet's Holy Grail – a global store of information that understands and responds to ordinary language in the same way a person does.

Although the system is still new, it has already produced massive interest and excitement among technology pundits and internet watchers.

Computer experts believe the new search engine will be an evolutionary leap in the development of the internet. Nova Spivack, an internet and computer expert, said that Wolfram Alpha could prove just as important as Google. "It is really impressive and significant," he wrote. "In fact it may be as important for the web (and the world) as Google, but for a different purpose."

Tom Simpson, of the blog Convergenceofeverything.com, said: "What are the wider implications exactly? A new paradigm for using computers and the web? Probably. Emerging artificial intelligence and a step towards a self-organising internet? Possibly... I think this could be big."

Wolfram Alpha will not only give a straight answer to questions such as "how high is Mount Everest?", but it will also produce a neat page of related information – all properly sourced – such as geographical location and nearby towns, and other mountains, complete with graphs and charts.

The real innovation, however, is in its ability to work things out "on the fly", according to its British inventor, Dr Stephen Wolfram. If you ask it to compare the height of Mount Everest to the length of the Golden Gate Bridge, it will tell you. Or ask what the weather was like in London on the day John F Kennedy was assassinated, it will cross-check and provide the answer. Ask it about D sharp major, it will play the scale. Type in "10 flips for four heads" and it will guess that you need to know the probability of coin-tossing. If you want to know when the next solar eclipse over Chicago is, or the exact current location of the International Space Station, it can work it out.
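
The "10 flips for four heads" example is, underneath, a plain binomial probability, so the kind of answer Wolfram Alpha works out on the fly can be checked in a couple of lines of Python:

from math import comb

# Probability of exactly 4 heads in 10 tosses of a fair coin: C(10, 4) / 2**10
probability = comb(10, 4) / 2**10
print(probability)  # 0.205078125, roughly a 20.5% chance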

Dr Wolfram, an award-winning physicist who is based in America, added that the information is "curated", meaning it is assessed first by experts. This means that the weaknesses of sites such as Wikipedia, where doubts are cast on the information because anyone can contribute, are taken out. It is based on his best-selling Mathematica software, a standard tool for scientists, engineers and academics for crunching complex maths.

"I've wanted to make the knowledge we've accumulated in our civilisation computable," he said last week. "I was not sure it was possible. I'm a little surprised it worked out so well."

Dr Wolfram, 49, who was educated at Eton and had completed his PhD in particle physics by the time he was 20, added that the launch of Wolfram Alpha later this month would be just the beginning of the project.

"It will understand what you are talking about," he said. "We are just at the beginning. I think we've got a reasonable start on 90 per cent of the shelves in a typical reference library."

The engine, which will be free to use, works by drawing on the knowledge on the internet, as well as private databases. Dr Wolfram said he expected that about 1,000 people would be needed to keep its databases updated with the latest discoveries and information.

He also added that he would not go down the road of storing information on ordinary people, although he was aware that others might use the technology to do so.

Wolfram Alpha has been designed with professionals and academics in mind, so its grasp of popular culture is, at the moment, comparatively poor. The term "50 Cent" caused "absolute horror" in tests, for example, because it confused a discussion on currency with the American rap artist. For this reason alone it is unlikely to provide an immediate threat to Google, which is working on a similar type of search engine, a version of which it launched last week.

"We have a certain amount of popular culture information," Dr Wolfram said. "In some senses popular culture information is much more shallowly computable, so we can find out who's related to who and how tall people are. I fully expect we will have lots of popular culture information. There are linguistic horrors because if you put in books and music a lot of the names clash with other concepts."

He added that to help with that Wolfram Alpha would be using Wikipedia's popularity index to decide what users were likely to be interested in.

With Google now one of the world's top brands, worth $100bn, Wolfram Alpha has the potential to become one of the biggest names on the planet.

Dr Wolfram, however, did not rule out working with Google in the future, as well as Wikipedia. "We're working to partner with all possible organisations that make sense," he said. "Search, narrative, news are complementary to what we have. Hopefully there will be some great synergies."

What the experts say

"For those of us tired of hundreds of pages of results that do not really have a lot to do with what we are trying to find out, Wolfram Alpha may be what we have been waiting for."

Michael W Jones, Tech.blorge.com

"If it is not gobbled up by one of the industry superpowers, his company may well grow to become one of them in a small number of years, with most of us setting our default browser to be Wolfram Alpha."

Doug Lenat, Semanticuniverse.com

"It's like plugging into an electric brain."

Matt Marshall, Venturebeat.com

"This is like a Holy Grail... the ability to look inside data sources that can't easily be crawled and provide answers from them."

Danny Sullivan, editor-in-chief of searchengineland.com

Well, I've signed up for their email newsletter. We'll see what develops.










Saturday

New Internet


Via The Independent in the UK, a report that's high on hyperventilation but still interesting: An invention that could change the internet for ever.



Wednesday

Unexpected relationships with a machine


An immediate goal of text data mining is to construct synopses of the hypertextual material: summaries of the topics covered by the texts in the database, according to criteria defined by the scriptor.


Another goal is to identify salient points: concise lists of the different topics, ideally in order of importance and adjustable in depth. A further goal is taxonomy (keywords in the database): determining which topics in the documents are (or should be) of interest to the reader.


This is to be followed by classification: the grouping of documents by topic, either as defined by the scriptor or as suggested by the information content itself. The most valuable help for the hypertext reader is the identification of dependencies among the different topics, especially of unexpected relationships.
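
As a rough, purely illustrative sketch of the first two goals (synopses and salient points), and certainly not the machinery this post has in mind, a few lines of Python can already pull the most salient terms out of a small document base:

import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "of", "and", "to", "in", "is", "are", "for", "on", "by"}

def salient_terms(documents, top_n=10):
    """Return the most frequent non-stopword terms across the texts,
    in descending order of importance (here simply raw frequency)."""
    counts = Counter()
    for text in documents:
        words = re.findall(r"[a-z]+", text.lower())
        counts.update(w for w in words if w not in STOPWORDS and len(w) > 2)
    return counts.most_common(top_n)

# Hypothetical miniature database of hypertext documents.
documents = [
    "Hypertext links connect documents across the web.",
    "Search engines index documents and rank them for readers.",
]
print(salient_terms(documents))

Real text data mining would replace raw frequency with something like TF-IDF and add clustering for the classification step, but the shape of the task is the same.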






Sunday

Semantic cluster graphs


The makers of Cluuz.com, a semantic search engine, are attempting to get users to rethink the way they find information online.


Mr. Frank surmises, "What consumers want is to be able to find information faster, and they want clues to help them find their way to that information faster. What we're doing is quite unique and nobody can do what we're doing." According to Mr. Frank, search engines such as Cluuz use the science of semantics - the study of meaning in language - to produce more relevant searches. While the site works as a stand-alone search engine, it could also work if it were rolled into existing offerings at Google, Yahoo or Microsoft's MSN.

Cluuz also shows the connections between various Web documents and sites, based not on actual links between pages but on the information on the pages themselves, through a technology it calls semantic cluster graphs. It displays these results in a visual format that resembles a spider's web!
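
Cluuz's own technology is proprietary, but the general idea, connecting pages by what they say rather than by how they hyperlink, can be sketched in plain Python: treat two documents as connected whenever they share enough significant terms, then hand the resulting edges to any graph-drawing tool for the spider's-web view. The pages below are invented examples.

import re
from itertools import combinations

def significant_terms(text, min_len=4):
    """Crude stand-in for real semantic analysis: the set of longer words."""
    return {w for w in re.findall(r"[a-z]+", text.lower()) if len(w) >= min_len}

def cluster_graph(pages, min_shared=2):
    """Connect pages that share at least `min_shared` terms, ignoring hyperlinks entirely."""
    terms = {name: significant_terms(text) for name, text in pages.items()}
    edges = []
    for a, b in combinations(pages, 2):
        shared = terms[a] & terms[b]
        if len(shared) >= min_shared:
            edges.append((a, b, sorted(shared)))
    return edges

pages = {
    "page1": "Semantic search engines study the meaning behind a query.",
    "page2": "Understanding meaning, not keywords, is what semantic engines attempt.",
}
print(cluster_graph(pages))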


Semantic web technology


Aggregator sites, such as Dogpile, which check several engines at the same time, could weaken Google's dominance.


They can be very helpful because search engines index their results in different ways. Semantic search engines, which focus on the meaning of your search rather than just the keyword, could be the next big thing in search, according to Ben Camm-Jones, news editor of Web User magazine. "If there's going to be anything, it will be semantic web technology that overtakes Google - if it's a really compelling proposition, and if, somehow, we can shake people out of this belief that Google is the only way to find information on the web," he said.

Microsoft has invested in this area by buying the semantic search site Powerset.
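
The aggregator idea mentioned above is straightforward to sketch: send the same query to several engines in parallel and merge the de-duplicated results. The snippet below assumes hypothetical per-engine query functions, since the real engines expose different APIs and mostly require keys:

from concurrent.futures import ThreadPoolExecutor

def metasearch(query, engines):
    """Query several search backends in parallel and merge their result lists.

    engines: dict mapping engine name -> callable(query) -> list of result URLs.
    """
    merged, seen = [], set()
    with ThreadPoolExecutor() as pool:
        futures = {name: pool.submit(fetch, query) for name, fetch in engines.items()}
        for name, future in futures.items():
            for url in future.result():
                if url not in seen:  # drop results already returned by another engine
                    seen.add(url)
                    merged.append((url, name))
    return merged

# Placeholder backends standing in for Google, Yahoo, MSN and the like.
fake_engines = {
    "engine_a": lambda q: ["http://a.example/result1", "http://shared.example/result"],
    "engine_b": lambda q: ["http://shared.example/result", "http://b.example/result1"],
}
print(metasearch("semantic search", fake_engines))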

Monday

My little SEO

It seems like everyone is practising a little search-engine optimisation (SEO) these days. While you might have the basics covered, do you know where SEO is heading? I’ve been optimising websites for search engines since the late 1990s and this has given me the fortunate opportunity to witness the evolution of search-engine algorithms over the years. Based on this experience, this article attempts to offer some insight on where SEO is moving.

The evolution of SEO has been an interesting one. The term was first coined in 1997, but the intentions behind it have been practised since the early days of web search; one could argue as far back as Jumpstation or even Archie in the early 1990s.

SEO is defined by Wikipedia as: “The process of improving the volume and quality of traffic to a website from search engines via ‘natural’ (‘organic’ or ‘algorithmic’) search results for targeted keywords.” When performed successfully, SEO can tap into the tremendous number of searches performed on a daily basis and deliver a considerable stream of traffic and revenue to a website owner.

It is this potential for huge reward that has meant that search engines have had to grow smarter over the years. The early search engines were so primitive that the first phase in the life of SEO began with on-page optimisation. Quite simply, webmasters could tweak the content and various elements of their web pages or documents and, in doing so, be relatively confident of ranking well on their chosen keywords and phrases. I say words and phrases because 15 years ago, one-word searches were commonplace. These days, as users have evolved, the average query length tends to be up to three or four words. (...)
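
On-page optimisation of that early kind is easy to quantify crudely. As an illustration only (keyword density stopped being a reliable ranking lever long ago), here is a tiny Python check of how often a target phrase appears in a page's body and whether it appears in the title; the sample text is invented:

import re

def on_page_check(body, title, phrase):
    """Rough on-page signals: phrase count in the body, body density, and presence in the title."""
    words = re.findall(r"[a-z']+", body.lower())
    occurrences = body.lower().count(phrase.lower())
    density = occurrences * len(phrase.split()) / max(len(words), 1)
    return {
        "occurrences": occurrences,
        "density": round(density, 4),  # fraction of body words covered by the phrase
        "in_title": phrase.lower() in title.lower(),
    }

print(on_page_check(
    "Free ebooks are what this metablog collects: ebooks, ebooks everywhere.",
    "Metablog ebooks from Googlebooks",
    "free ebooks",
))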

Rob Stokes is the founder and Group CEO of Quirk eMarketing,

Saturday

Content no more




Linked by David Adams on Sat 5th Jul 2008 04:57 UTC, submitted by snydeq


Neil McAllister raises questions regarding the Web now that it no longer resembles Tim Berners-Lee's early vision: Is the Web still the Web if you can't navigate directly to specific content? If the content can't be indexed and searched? If you can't view source? In other words, McAllister writes, if today's RIAs no longer resemble the 'Web,' then should we be shoehorning them into the Web's infrastructure, or is the problem that the client platforms simply aren't evolving fast enough to meet our needs?

Tuesday

Google at the top


In the space of a year, Google supplants Microsoft as No. 1 in a survey of the level of trust American consumers have in various companies, while Microsoft drops to No. 10. Another triumph for the Windows Vista operating system.

Pond-skater minds with Google


Andrew Sullivan


Here’s something: Friedrich Nietzsche used a typewriter. Many of those terse aphorisms and impenetrable reveries were banged out on an 1882 Malling-Hansen Writing Ball. And a friend of his at the time noticed a change in the German philosopher’s style as soon as he moved from longhand to type.


“Perhaps you will through this instrument even take to a new idiom,” the friend wrote. Nietzsche replied: “You are right. Our writing equipment takes part in the forming of our thoughts.”


Gulp. The technology writer Nicholas Carr, who pointed out this item of Nietzsche trivia in the new issue of The Atlantic, proceeded to make a more disturbing point. If a typewriter could do this to a mind as profound and powerful as Nietzsche’s, what on earth is Google now doing to us?
Are we fast losing the capacity to think deeply, calmly and seriously? Have we all succumbed to internet attention-deficit disorder? Or, to put it more directly: if you’re looking at a monitor right now, are you still reading this, or are you about to click on another link?


The astonishing benefits of Google are barely worth repeating. When I started to contribute this column, I used to keep a month’s worth of The New York Times stacked in my study. If I recalled an article or a report that I wanted to refer to, I’d spend a happy few minutes wrestling with frayed and yellowing paper, smudging myself with ink, and usually ended up reading an article that had nothing to do with my search.


I needed a good memory – even visually – to track my vague recollection down. I needed time. I needed to think a little before I began my research. Now all I do is right-click and type a few words. And all is instantly revealed.


I spend most of my day blogging – at a current rate of about 300 posts a week. I’m certainly not more stupid than I used to be; and I’m much, much better and more instantly informed.
However, the way in which I now think and write has subtly – or not so subtly – altered. I process information far more rapidly and seem able to absorb multiple sources of information simultaneously in ways that would have shocked my teenage self.


In researching a topic, or just browsing through the blogosphere, the mind leaps and jumps and vaults from one source to another. The mental multitasking – a factoid here, a YouTube there, a link over there, an e-mail, an instant message, a new PDF – is both mind-boggling when you look at it from a distance and yet perfectly natural when you’re in mid-blog.


When it comes to sitting down and actually reading a multiple-page print-out, or even, God help us, a book, however, my mind seizes for a moment. After a paragraph, I’m ready for a new link. But the prose in front of my nose stretches on.
I get antsy. I skim the footnotes for the quick info high that I’m used to. No good. I scan the acknowledgments, hoping for a name I recognise. I start again.
A few paragraphs later, I reach for the laptop. It’s not that I cannot find the time for real reading, for a leisurely absorption of argument or narrative. It’s more that my mind has been conditioned to resist it.


Is this a new way of thinking? And will it affect the way we read and write? If blogging is corrosive, the same could be said for Grand Theft Auto, texting and Facebook messaging, on which a younger generation is currently being reared. But the answer is surely yes – and in ways we do not yet fully understand. What we may be losing is quietness and depth in our literary and intellectual and spiritual lives.
The playwright Richard Foreman, cited by Carr, eulogised a culture he once felt at home in thus: “I come from a tradition of western culture, in which the ideal (my ideal) was the complex, dense and ‘cathedral-like’ structure of the highly educated and articulate personality – a man or woman who carried inside themselves a personally constructed and unique version of the entire heritage of the West.
“[Now] I see within us all (myself included) the replacement of complex inner density with a new kind of self – evolving under the pressure of information overload and the technology of the ‘instantly available’.”
The experience of reading only one good book for a while, and allowing its themes to resonate in the mind, is what we risk losing. When I was younger I would carry a single book around with me for days, letting its ideas splash around in my head, not forming an instant judgment (for or against) but allowing the book to sit for a while, as the rest of the world had its say – the countryside or pavement, the crowd or train carriage, the armchair or lunch counter. Sometimes, human beings need time to think things through, to allow themselves to entertain a thought before committing to it.


The white noise of the ever-faster information highway may, one fears, be preventing this. The still, small voice of calm that refreshes a civilisation may be in the process of being snuffed out by myriad distractions.
I don’t want to be fatalistic here. As Carr points out, previous innovations – writing itself, printing, radio, television – have all shifted the tone of our civilisation without destroying it. And the capacity of the web to retrieve the old and ancient and make them new and accessible again is a small miracle.


Right now, we may be maximally overwhelmed by all this accessible information – but the time may come when our mastery of the new world allows us to gain more perspective on it.
Here’s hoping. Shallowness, after all, does not necessarily preclude depth. We just have to find a new equilibrium between the two. We need to be both pond-skaters and scuba divers. We need to master the ability to access facts while reserving time and space to do something meaningful with them.


It is inevitable this will take our always-evolving species and ever-malleable brains a little time – and the Google era in a mass form is not even a decade old.
Some have suggested a web sabbath – a day or two in the week when we force ourselves not to read e-mails or post blogs or text messages; a break in order to think in the old way again: to look at human faces in the flesh rather than on a Facebook profile, to read a book rather than a blog, to pray rather than browse.


I think I’ll start with Nietzsche at some point. But right now I have a blog to fill.






Monday

Is Google making us stupid?

Idea Watch: Is Google Making Us Stupid?
Posted by Tom Weber

Ever worry that all that time you spend on the Web might be rewiring your brain? In the July/August issue of the Atlantic magazine, writer Nicholas Carr confesses to that fear–and explores this question: “Is Google Making Us Stupid?”

In a nutshell, Mr. Carr’s argument is this: Spending so much time reading on the Web is training us to accept information in small bites, and that’s worrisome. He writes:
“Immersing myself in a book or a lengthy article used to be easy … That’s rarely the case anymore. Now my concentration often starts to drift after two or three pages. I get fidgety, lose the thread, begin looking for something else to do. I feel as if I’m always dragging my wayward brain back to the text. The deep reading that used to come naturally has become a struggle.”
Of course, the notion that the surf-happy world of the Web is affecting our attention span isn’t new. Even before the rise of the Web, other types of media–such as music videos–were being blamed for the same thing. But Buzzwatch suspects many readers will see something of themselves in the article’s description of people cramming ever-more bits of information into every last moment online.

Writes Mr. Carr:
If we lose those quiet spaces, or fill them up with “content,” we will sacrifice something important not only in our selves but in our culture.
Readers, do you feel that spending time on the Web is rewiring your brain? And if yes, do you care?




Comments

I don’t know — what was the question again? My attention drifted off before the end of the post . . .
Comment by Independent Girl - June 10, 2008 at 2:17 pm
I did have a comment on this article but I’ve moved on in the past couple of minutes.
Comment by Common Sense - June 10, 2008 at 2:17 pm
Susan Jacoby, former Washington Post reporter and author of The Age Of American Unreason, talked about this back in February.

“A Nation Of Idiots?” http://www.boom2bust.com/2008/02/19/a-nation-of-idiots/
Comment by Boom2Bust.com - June 10, 2008 at 2:20 pm
I’ve found I’m just as able to immerse myself in some books and articles, but less so for other books and articles. Generally, the ones I’m able to immerse myself in are those that are good, and those I’m not able to immerse myself in are those that are mediocre or bad. If there is a crowding-out effect, it’ll be on bad books, and I don’t really have a problem with that.
Comment by Bob's My Uncle - June 10, 2008 at 2:21 pm

Why is this blog so slow?
Comment by Nevermore - June 10, 2008 at 2:21 pm
The same effect has been proven with television and kids, so it makes sense. Too many changes (commercials), too much content.
Comment by Sorry...wasn't paying attention... - June 10, 2008 at 3:06 pm
Its easy to see Mr. Carr has been Googling way too much
Comment by 823077 - June 10, 2008 at 3:37 pm
i definitely agree with this and have felt it much more difficult to read an “e-book” than a real book… If I print the book out then I’m fine…
Comment by boner to boner - June 10, 2008 at 8:45 pm
I’m an old guy. I am so much better informed now then in pre-google. And, because of these vast information tools I now am able to accomplish life’s tasks better and quicker. I now have time to appreciate a good read, nature, art and travel. I am in my best of times. And, not dumber but smarter I should think. Appreciate the gains you can expect from these tools in the future. Lucky you.
Comment by mark - June 11, 2008 at 8:34 pm
stupid google
Comment by s - June 11, 2008 at 8:58 pm
i shall perform a google search to find the answer to this question
Comment by e$ - June 12, 2008 at 4:17 pm
Trackbacks

[…] our look earlier today at concerns that the Web may be rewiring our brains for short attention spans, Buzzwatch thought it appropriate to highlight this video making the […]
Trackback by Buzzwatch : Daily Diversion: The Democratic Primary--For Short Attention Spans - June 10, 2008 at 4:51 pm
[…] The best answer to the question on copy length came to me at a seminar I attended: “People don’t read long copy or short copy. They read what interests them.” That’s why relevancy is even more important today then it was in 1988. People have information overload and, because of Google, perhaps, people have shorter attention spans. That was the theory I came across at a recent Wall Street Journal Blog entitled: “Is Google making us stupid?” […]
Trackback by Google=Stupid? « Marketing That’s Measurable - June 12, 2008 at 1:38 am

Sunday

Google's effect on our mind


In the debate over Google’s effect on humanity, everyone is missing one big issue
Posted on June 19, 2008 by Douglas Bell

Yes, again: The cover story that launched a thousand blog posts.
For the second time this week, I’m taking my lead from The Atlantic (it’s the best magazine in the world right now, making even The New Yorker appear precious and overwrought). Unsurprisingly, the two articles that stirred me to blog were both (a) about the Web and (b) rife with fundamental, flummoxing misperception. I’ve already written about Mark Bowden’s piece on the Web-induced demise of The Wall Street Journal. Now for the big kahuna: Nicholas Carr’s take on Google. Titled "Is Google Making Us Stupid?", this cover story has been sticking in bloggers’ craws all week, inspiring them to pee on hydrants to mark their view on the current state of media, the Web and the human condition. Carr’s view is clear: the hypertext world of Google is slowly eroding our capacity for sustained contemplation, thereby flattening our collective intelligence. One thing is also clear: the piece has an enormous blind spot.


Some agree with Carr, of course:
Is this a new way of thinking? And will it affect the way we read and write? If blogging is corrosive, the same could be said for Grand Theft Auto, texting and Facebook messaging, on which a younger generation is currently being reared. But the answer is surely yes—and in ways we do not yet fully understand. What we may be losing is quietness and depth in our literary and intellectual and spiritual lives. —Andrew Sullivan in The Times of London


Some disagree:
Maybe the reason why Nick and so many other literati are losing their patience with long form information is that it is so fundamentally inefficient and inferior to connected bits of information.
You look at a book, read a book, and you easily perceive a coherent whole. You look at all the information on that book’s topic on the Web, all connected, and you can’t see the sum of the parts—but we are starting to get our minds around it. We can’t yet recognize the superiority of this networked thinking process because we’re measuring it against our old linear thought process.
Nick romanticizes the “contemplation” that comes with reading a book. But it’s possible that the output of our old contemplation can now be had in larger measure through a new entirely non-linear process.—Scott Karp at seekingalpha.com


But here’s what both sides in the debate missed: Google’s motivation in all this is money. For all their drivel about corporate responsibility (Google’s motto is the impossibly pretentious “Don’t be evil”), Google “monetizes” (lovely word that) what Karp calls “networked thinking” by charging fractions of whatever currency to place links nearer to the front page of a given search.
And guess what? Google couldn’t care less whether this new form of thought is our salvation or our damnation. In business speak, they’re “content agnostic.” The debate about which is the better mode of rumination for optimal human development—algorithmic/hypertext or monastic/contemplative—is nothing but a sideshow.


So long as the dollar calls the tune at Google, it seems to me that how we read—Carr’s po-faced lament notwithstanding—is somewhat less problematic than whether what we read is the straight goods or just another ad campaign done up in digital drag.


Saturday

Search engines and their limits

We know that search engines (Google, Yahoo, MSN) and their associated retrieval technology, as of 2008, are not much more sophisticated than they were in, say, 1998 – or, for that matter, in 1945, when the scientist Vannevar Bush published his essay “As We May Think” (see Internet Pioneers).

The system he describes there, the memex, is remarkably similar to modern hypertext.
While new interfaces, video, images and binary streams of any kind you can think of are easily presented through plug-ins and other “wares,” we are still struggling to get to the “next level” of retrieval technology.
Algorithmic search, human-aided search and meta search engines are par for the course. A search engine that also incorporates artificial intelligence and scales to the massive internet is still far away.
In the meantime, we are having fun with universal search/blended search, local search and such.
So, in the spirit of the power of video and the explosion of its use on the internet, I found an interesting film from Los Angeles that says a “thousand words” with pictures and the human spirit.
When search engines can figure out all the “things” they must capture, retrieve, organize and intellectually present – for example, in this video – we will have reached a goal that search engine scientists everywhere want, and one that, I hope, captures users the same way the film below does.
The final scene says it all.


