Company Offers Customizable Web Spidering
TechReviewAl writes "A company called 80legs has come up with an interesting new web business model: customized, on-demand web spidering. The company sells access to its spidering system, charging $2 for every million pages crawled, plus a fee of three cents per hour of processing used. The idea is to offer Web startups a way to build their own web indexes without requiring huge server farms. 'Many startups struggle to find the funding needed to build large data centers, but that's not the approach 80legs took to construct its Web crawling infrastructure. The company instead runs its software on a distributed network of personal computers, much like the ones used for projects such as SETI@home. The distributed computing network is put together by Plura Processing, which rents it to 80legs. Plura gets computer users to supply unused processing power in exchange for access to games, donations to charities, and other rewards.'"
Ah, abusing someone else's bandwidth... (Score:4, Insightful)
Let's assume that spidering a page costs 10 kB of data.
So that's $2 for 1M pages, or 10 GB of data downloaded.
So that's at least $1 worth of data transfer being shifted onto the suckers, err, "volunteers" whose home networks are running this app.
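Back-of-the-envelope, the parent's numbers work out. Both figures below are assumptions for illustration (10 kB/page is the parent's guess; $0.10/GB is a hypothetical residential transfer cost, not anything 80legs publishes):

```python
# Rough cost-shifting estimate for 80legs' crawling model.
# All figures are assumptions, not published numbers.
PAGES = 1_000_000        # pages per $2 billing unit
KB_PER_PAGE = 10         # assumed average page size
COST_PER_GB = 0.10       # assumed transfer cost borne by volunteers, $/GB

total_gb = PAGES * KB_PER_PAGE / 1_000_000   # kB -> GB
shifted_cost = total_gb * COST_PER_GB

print(f"{total_gb:.0f} GB downloaded, ~${shifted_cost:.2f} "
      f"shifted to volunteers per $2 of revenue")
```

So under these assumptions, about half the sticker price is paid by somebody else's connection.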
Re: (Score:2)
Re: (Score:2)
All in all, I'd have to say this is a pretty good idea.
It's the bandwidth (Score:2)
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Almost everyone running Plura's crap is unaware of it. It's embedded in web pages like advertising. For example, you know the highly popular Desktop Tower Defense?
http://www.handdrawngames.com/DesktopTD/Game.asp [handdrawngames.com]
Look at the page sources. There's a Plura bug on it running the whole time you're playing the game. They've already been doing this for a long time.
Nifty... (Score:3, Interesting)
Re: (Score:1)
Hrm... (Score:5, Insightful)
Sounds like a legitimate front for identity thieves, spammers, or even worse... Marketers.
I suppose it's easier than running your own botnet.
Re: (Score:1, Funny)
or even worse... Marketers
So, which one should it be - insightful or redundant?
ooo free games (Score:2)
Re: (Score:2)
Seems cheap! (Score:4, Insightful)
Re: (Score:1)
Re:Seems cheap! (Score:4, Funny)
Japanese girls puking into each other's mouths...Nope
Bestiality...Nope
Brazilian fart porn...A bit
As my first try of Bing, that wasn't very impressive.
Re: (Score:2)
You've missed the point... or you've never tried to use Google programmatically.
Google's search APIs are all bound to JavaScript now. There is no way to connect to them from your Java, Python, or Ruby application. Not, at least, without getting your IP(s) blocked for running too many queries.
This spidering service provides something similar to what Alexa Web Search once did.
Buried in Digsby (Score:4, Informative)
This is apparently the service that caused a lot of controversy when people discovered it was somewhat hidden in Digsby [wikipedia.org].
Re: (Score:2)
Digsby developer "chris" has stated that CPU usage is limited to 75% for desktops, and 25% for laptops unless operating on battery power.
Does that sound like an insane amount of CPU usage for a damn IM client to anyone else? Why the hell would they embed Plura into an IM client anyway? This whole thing seems too fishy to me.
Re: (Score:1, Insightful)
Why the hell would they embed Plura into an IM client anyway?
Unfortunately, it's all about money.
This will work (Score:1)
Free web index for download (Score:2, Informative)
There is a spider crawling the web that claims to be building a free, downloadable web index for similar purposes.
Torrent link for the index and information at http://www.dotnetdotcom.org/ [dotnetdotcom.org].
Just what I need (Score:1)
Who are the customers? (Score:4, Interesting)
I can see how they might get a fair number of people to donate their spare cycles for this, if the rewards are seen as sufficiently interesting. But are there really a whole bunch of startups (or other companies) champing at the bit to create a new search engine? Other than marketers or malware purveyors, I mean. And do these searches honor robots.txt exclusions?
BTW I took a quick look at 80legs' website in an attempt to get these answers. I came up empty in that regard - so I will comment on how the CEO's hair makes him look like an in-disguise member of the Conehead family. Seriously, what's with the hair?
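On the robots.txt question: whether a distributed crawler honors exclusions is entirely up to whoever writes the crawl logic, but the check itself is a few lines with Python's standard library. The rules and user-agent name below are made up for illustration:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt content; a real crawler would fetch
# http://example.com/robots.txt before requesting any other page.
rules = """
User-agent: *
Disallow: /private/
""".splitlines()

rp = RobotFileParser()
rp.parse(rules)

allowed = rp.can_fetch("80legs-bot", "http://example.com/index.html")  # True
blocked = rp.can_fetch("80legs-bot", "http://example.com/private/x")   # False
```

So there's no technical excuse for ignoring exclusions; the question is whether a pay-per-million-pages service bothers.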
Congratulations, sir (Score:1)
Congratulations on the proper use of the word "champing". I hear people use "chomping" in that context all the time, and can't recall the last time I heard the correct word.
Re: (Score:2)
Re: (Score:2)
Hell, they should just let people donate their spare cycles for cash. I'd do it.
Re: (Score:1)
Occam's razor. (Score:3, Insightful)
Reality... (Score:2, Insightful)
I am surprised... (Score:1)
I'm a little confused (Score:3, Interesting)
Is there really a big demand out there for outsourced spidering? I had not heard of this market. They seem to be implying that there are all these start-up outfits out there who have invented really amazing, unique UIs that allow people to find exactly what they need on the Web, and all they need to be successful is access to a searchable index. Huh??
I mean, if you're going to be some kind of start-up search engine or "semantic company" (whatever that means), shouldn't Web spidering be your core competency? If you're going to differentiate yourself in the market, how can you buy spidering as a commodity? How do you expect to attract any investment if you're telling potential investors that you rent your spidering capability from another start-up -- let alone one that uses some kind of half-baked P2P technology to do the work?
Seriously, in a world where Google seems willing to partner with just about anybody who needs any kind of searching for reasonable rates, what is this company's proposed customer base? (And no, the Technology Review article includes no quotes from customers at all.)
Re:I'm a little confused (Score:4, Informative)
Raw spidering is pretty much a commodity already. You're issuing GET requests over HTTP (for the most part). The "semantic" stuff comes into play when analyzing the results and doing interesting things with the raw information you get back. If people can spend more time focused on doing the 'interesting bits' and less time on having to scale up to pull in the raw data to analyze, they'll be better off for it and more likely to be able to focus on creating something new/interesting/distinguishing.
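The "commodity" part really is small. One crawl step is just a GET plus link extraction; a sketch with the standard library, run here on an inline HTML string instead of a live fetch:

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collect href targets from <a> tags: the core of one crawl step."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

# In a real crawler this HTML would come from urllib.request.urlopen(url).read().
page = '<html><body><a href="/about">About</a> <a href="http://example.com/">Ext</a></body></html>'
extractor = LinkExtractor()
extractor.feed(page)
# extractor.links -> ["/about", "http://example.com/"]
```

Everything hard about crawling at scale (scheduling, politeness, dedup, storage) sits around this loop, which is exactly what 80legs is selling.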
People (generally) don't write their own web servers, nor their own TCP/IP stacks, often don't write their own session handling logic, or security code. All of these things have been commoditized. Perhaps too many people are relying on 'cloud computing' these days, but hosting and storage 'in the cloud' is where all the cool kids are playing right now (I don't necessarily agree with it, and probably wouldn't put all my eggs in that basket myself, but others are doing so). Spidering may be the next frontier to get commoditized.
Perhaps not everyone is comfortable 'partnering' with Google for everything? If someone was going to work on developing the 'next big thing', would you rather invest in something where the people had spent an inordinate amount of time building network capacity up to do drone work, or used a service like 80legs, or built the prototypes on Google's servers? Depending on the project, any of those make sense, but I'd prefer to use a service like 80legs myself. They're small enough and hungry enough they should give top notch customer service at this stage, whereas Google's not going to give you a number to call for direct service (maybe they do if you're spending loads of money, but then you're back to wise use of money).
The P2P aspect of how they're doing the spidering may be clever, but I'd rather see a more direct use of data-center resources around the globe, rather than relying on a SETI-like participation model.
Re: (Score:3, Informative)
Advertising uses a fair amount of spidering for things such as contextual targeting (where has a user been, and what are their interests). Amazon was completely apathetic toward a company that offered $50 million for sending crawling business their way. I was surprised, to say the least. When that company tried to do it piecemeal instead, Amazon got very upset. So there's demand, but it's probably not very large in terms of well-capitalized customers.
Rent our botnet! (Score:3, Interesting)
This looks like an attempt to monetize a botnet. What, exactly, do the people running their "client" get out of this? Do they know they're sucking bandwidth, and possibly being billed for it, on behalf of someone else?
I run a web spider [sitetruth.com] of sorts. And I know the people who run a big search engine. Reading the web sites isn't the bottleneck. Analyzing the results and building the database is. Outsourcing the reading part doesn't buy you much. If this just did a crawl, it would be of very limited value. That's not what it does.
What they're really doing [pbworks.com] is offering a service that lets their customers run the customer's Java code on other people's machines in the botnet. That's worrisome. There are some security limits, which might even work. Supposedly, all the Java apps can do is look at crawled pages and phone results home. Right.
This thing uses the Plura botnet. [pluraprocessing.com] "Plura® is a grid computing system. We contract with affiliates, who are owners of web pages, software, and other services, to distribute our grid computing code. We utilize the excess resources of peripheral computers that are browsing the internet when such browsing leads to a web page of one of our affiliates. That web page has imbedded code that allows the visitor to participate in the grid computing process. We also utilize embedded code in software and other services to allow such participation." Not good.
The main infection vector is apparently the Digsby chat client [lifehacker.com], which comes bundled with various crapware. The Digsby feature list [digsby.com] does not mention that Plura is in their package.
This thing needs to be treated as hostile code by firewalls and virus scanners.
Re: (Score:2, Interesting)
Outsourcing the reading part doesn't buy you much. If this just did a crawl, it would be of very limited value. That's not what it does.
Wrong. If I want to spider a single web site, many sites have rate-limiters that kick in and will block me after a while. This would allow me to hit it from multiple machines.
There are some security limits, which might even work. Supposedly, all the Java apps can do is look at crawled pages and phone results home. Right.
Why the sarcasm? This seems like a perfect use case for the JVM's security mechanism.
Re: (Score:2)
many sites have rate-limiters that kick in and will block me after a while. This would allow me to hit it from multiple machines.
Many sites have rate limiters to prevent denial-of-service attacks. This would allow easy DDoS attacks.
ftfy
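Server-side, the limiter being "fixed" here is typically a per-IP counter, which is precisely why it's useless against a crawl spread over thousands of volunteer machines: each IP stays comfortably under the threshold. A minimal sliding-window version (window and threshold values are made up):

```python
import time
from collections import defaultdict, deque

class PerIPRateLimiter:
    """Allow at most max_requests per window_seconds from each client IP."""
    def __init__(self, max_requests, window_seconds):
        self.max_requests = max_requests
        self.window = window_seconds
        self.hits = defaultdict(deque)   # ip -> timestamps of recent requests

    def allow(self, ip, now=None):
        now = time.monotonic() if now is None else now
        q = self.hits[ip]
        while q and now - q[0] > self.window:
            q.popleft()                  # drop requests outside the window
        if len(q) >= self.max_requests:
            return False                 # one aggressive IP gets cut off...
        q.append(now)
        return True

limiter = PerIPRateLimiter(max_requests=3, window_seconds=60)
single = [limiter.allow("1.2.3.4", now=t) for t in range(5)]      # 4th/5th denied
botnet = [limiter.allow(f"10.0.0.{i}", now=0) for i in range(5)]  # ...but 5 IPs all pass
```

That asymmetry is the whole DDoS worry in the parent comments: per-source throttling simply doesn't see a distributed crawler coming.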
Re: (Score:2)
If I want to spider a single web site, many sites have rate-limiters that kick in and will block me after a while. This would allow me to hit it from multiple machines.
The better web spiders run very slowly as seen from each site. At one time, Google only read about one page every few minutes per site. The Internet was slower then. Cuil's crawler is known to be overly aggressive, but that's a design flaw. (Too much distribution, not enough coordination.)
At SiteTruth, we never read more than 20 page
List of known applications (Score:1)
Can we generate a list of applications known to use Plura? Or does one already exist?
Re: (Score:1)
insuma.de (Score:1)