Google Accelerator: Be Careful Where You Browse 89
Eagle5596 writes "It seems that there can be a serious problem with Google's Web Accelerator, and I'm not talking about the privacy concerns. Evidently, due to the prefetching of pages, some people have been finding that their accounts and data are being deleted."
Just goes to show.... (Score:4, Funny)
Re:Just goes to show.... (Score:1)
Re:Just goes to show.... (Score:2)
News at 11:00.
Re:Just goes to show.... (Score:2)
I am not getting into the "this is cool" or "this is evil" argum
it's all about intelligence (Score:1, Interesting)
Another POV... (Score:3, Insightful)
I'm not sure if I agree with the "Google is the new Microsoft" sentiments, but thinking before you install new software is always a good idea.
Re:Another POV... (Score:4, Insightful)
This brings the current list of reasons not to use the Accelerator up to three, counting the obvious privacy issues.
Re:Another POV... (Score:2)
I just read it incorrectly. Not an uncommon event on my part... >_
Re:Another POV... (Score:2)
Not necessarily. If logging out requires clicking a link, the response to that link may be cached, so you don't actually get logged out when you click it. A webapp with a poor session implementation can run into the same problem.
Websites using session-based authentication really should use a form and do a POST to log out.
Of course, if web sites used http-auth (as they should), this wouldn't be a problem at all.
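For example, a minimal logout control done as a POST form rather than a link (the /logout URL here is just an illustration):

```html
<!-- A POST form: prefetchers and caches won't trigger this, unlike a bare link -->
<form action="/logout" method="post">
  <input type="submit" value="Log out">
</form>
```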
Re:Another POV... (Score:3, Informative)
Re:Another POV... (Score:2)
Re:Another POV... (Score:2)
I think the author is jumping the gun. I believe that this Google Web Accelerator was born from the "Hey, why not use Google's cache all the time when browsing sites on frequently slow servers?" idea, and that these issues are merely unintentional side effects that still need to be fixed (which will be pretty complicated if you ask me).
Still, Google will have the opportunity to store virtually the entire browsing history of Google Web Accelerator users, which people should keep in mind when installing the
Bug in the pages, not Google (Score:5, Informative)
Re:Bug in the pages, not Google (Score:2)
input type=image (Score:3, Informative)
Re:input type=image (Score:2)
The only two "common" ways that I'm aware of to submit a form as a POST action are to use a submit button or to fire the form's submit from a scripted event.
If you know of a way to submit a POST action from a text link without using javascript, please share it with the rest of us.
Re:Bug in the pages, not Google (Score:1, Informative)
Re:Bug in the pages, not Google (Score:1)
Re:Bug in the pages, not Google (Score:3, Interesting)
If you want to POST something, the only way to do that is to use a form. Forms cause a few problems.
IE and Opera render forms slightly "creatively". Wherever a form ends, the browser inserts vertical space in many situations, some of which are unavoidable. This usually makes the page render very strangely. If I want a list of links, and some of them have side-effects and some don't - my choices are to make some of them forms and some regular
Re:Bug in the pages, not Google (Score:2)
Re:Bug in the pages, not Google (Score:5, Informative)
If you want to POST something, the only way to do that is to use a form. Forms cause a few problems.
With all due respect, even though forms aren't perfect, they've been around over a decade, and if you can't deal with them by now, don't bother calling yourself a web developer.
Wherever a form ends, the browser inserts vertical space in many situations, some of which are unavoidable.
You're kidding, right? If you don't want a bottom margin, say so with CSS. This is basic FAQ newbie stuff [htmlhelp.org].
If you want a regular text link to submit a form, you have to use Javascript.
You can use CSS to make the button look like a text link.
This creates a dependency on Javascript
No it doesn't. You can easily use Javascript without depending on it. That's the way it's supposed to be used. This too is basic newbie stuff.
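As a sketch of what the parent means (endpoint and field names invented): a real submit button restyled with CSS so it reads like a plain text link, no Javascript involved, with the form's default margin suppressed:

```html
<form action="/delete-item" method="post" style="display: inline; margin: 0">
  <input type="hidden" name="id" value="42">
  <!-- Looks like a text link, but still submits a proper POST -->
  <input type="submit" value="Delete this item"
         style="border: none; background: none; padding: 0; color: #00f;
                text-decoration: underline; cursor: pointer">
</form>
```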
Other issues with form POSTing include the inability to use the back button after POSTing.
Huh? Works fine here.
there's no way for webmasters to tell the browser not to pop up with the "Are you sure you want to resend the POST action again?" window.
That's not a bug, that's a feature! POST is not idempotent. Resubmitting a POST is something that absolutely needs to be warned about, because it's a fundamentally different action to reloading a page with GET.
GET followed by refresh == just GET it again
POST followed by refresh == send the server some more data
So, if we choose to follow the HTTP guidelines, we break UI and style guidelines even worse.
There is a reason submit buttons look different to links. It's because they do different things. There are semantics associated with clicking a button that aren't associated with clicking a link. If style guidelines instruct you to make submit buttons look like links, then the style guidelines are probably broken.
So, if we choose to follow the HTTP guidelines, we break UI and style guidelines even worse. If we want to use POST we have to give up having the page render correctly in major browsers, break the back button, break the ability to bookmark state information (unless you encode some variables in the URL in GET fashion AND others in a POST), and make every link either an image (bad for accessibility and download speeds) or use some Javascript magic (even worse for bookmarkability and accessibility).
Wow. Get with the times. No really. I'd expect this kind of attitude from a newbie developer in the mid 90s.
Re:Bug in the pages, not Google (Score:2)
You're kidding, right? If you don't want a bottom margin, say so with CSS. This is basic FAQ newbie stuff.
Yes, and IE ignores it in some situations, and in some places will size your table as though it had still added the space.
there's no way for webmasters to tell the browser not to pop up with the "Are you sure you want to resend the POST action again?" window.
That's not a bug, that's a feature!
Re:Bug in the pages, not Google (Score:2)
I know Slashdot isn't a shining example of HTML compliance either
Nuff said.
That "Logout" link has a side effect of going to it, and it's a GET.
I'll say it anyway. It shouldn't.
At the most basic level, even tracking "how many people have seen this page" is an effect of loading it, that is affected by undesired prefetching. Keeping track of which pages are most recently accessed to handle server side caching of dynamic content is an effect of loading a page, even when no data on the page is changed.
Re:Bug in the pages, not Google (Score:2)
huh.
you mean that google doesn't obey robots.txt?
That surprises me.
Re:Bug in the pages, not Google (Score:2)
I know there isn't an exact line between what counts as a side effect and what
Mod, oh MOD PARENT UP! (Score:1)
Re:Bug in the pages, not Google (Score:2)
<a href="/link.script" method="post" variables="a=1;b=2">
I guess it is fortunate for us that you'll never see it - no browser would implement such a thing. It is contrary to the spirit of HTML in general and links specifically.
See the WhatWG discussion [dreamhost.com] of this sort of thing for more reasons why it sucks.
Re:Bug in the pages, not Google (Score:1, Interesting)
Implementors should be aware that the software represents the user in their interactions over the Internet, and should be careful to allow the user to be aware of any actions they might take which may have an unexpected significance to themselves or others.
In particular, the convention has been established that the GET and HEAD methods SHOULD NOT have the significance of taking an action other than retrieval. These methods ought to be considered "safe".
Re:Bug in the pages, not Google (Score:3, Informative)
FFS, how can these stupid web designers be threatening to sue Google when HTTP itself (the protocol of the WWW, which they should all have read) say
Re:Bug in the pages, not Google (Score:2)
-Bill
Re:Bug in the pages, not Google (Score:2)
Re:Bug in the pages, not Google (Score:1)
Most websites use some sort of link-checking program on a schedule to make sure they didn't accidentally create broken links within their own website.
Such link-check programs also follow all the links in your webpage.
Bug in the webpage. Nothing to do with Google.
Re:Bug in the pages, not Google (Score:2)
Well (Score:1)
Re:Well (Score:1, Informative)
If it can't determine whether or not a dynamic link (like "delete this") is harmful or not
The thing is, it can determine whether or not a dynamic link is harmful. GET is supposed to always be safe. The HTTP specification says so. Stupid web developers used GET in an unsafe way and are paying the penalty, because Google assumed that something defined as always safe was, well, safe.
Stupid web developers (Score:2, Informative)
The root of the problem is stupid web developers ignoring RFC 2616 and using the GET method to change state.
Now that all the people who cut corners thinking it didn't matter have been caught with their pants down, they look silly because the web applications they wrote are losing data, so they have gotten angry and pointed the finger at Google.
Sorry kids, but this is what happens when you don't follow the specs. They are there to make all our lives easier, you ignored them, you fucked up.
Yeah, maybe G
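To make the distinction concrete (the admin.cgi URL is made up), here's the broken pattern next to the spec-compliant one:

```html
<!-- Broken: any prefetcher or spider that follows this GET deletes data -->
<a href="/admin.cgi?action=delete&amp;id=7">Delete</a>

<!-- Correct: the state change only happens on an explicit POST -->
<form action="/admin.cgi" method="post">
  <input type="hidden" name="action" value="delete">
  <input type="hidden" name="id" value="7">
  <input type="submit" value="Delete">
</form>
```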
Re:Stupid web developers (Score:4, Insightful)
Seriously, using POSTs was something we all learned in 1994... Hopefully, this Google accelerator thingy will be popular enough to rid us of these creaky old broken sites.
Re:Stupid web developers (Score:4, Insightful)
Unless you have another idea, using GET for state is here to stay.
Re:Stupid web developers (Score:2, Interesting)
You can use POST without sacrificing bookmarkability. After your code processes the POSTed request, redirect to a GET-style URL that provides a view to the same content.
This technique is quite common.
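Roughly, the exchange looks like this (URLs invented); the browser ends up on a plain GET URL that's safe to bookmark, refresh, or go back to:

```
POST /comments/add HTTP/1.1      (browser submits the form)

HTTP/1.1 303 See Other           (server stores the data, then redirects)
Location: /comments/123

GET /comments/123 HTTP/1.1       (browser fetches the safe, bookmarkable view)
```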
Yikes! (Score:2)
Re:What the cunting fuck. (Score:3, Funny)
Re:What the cunting fuck. (Score:2)
I would also strongly congratulate them on complying with WWW standards for a change--and indeed I have done in the past on those few occasions when MS has chosen the path of standards.
Re:What the cunting fuck. (Score:1)
lol fag
Re:What the cunting fuck. (Score:2)
The rules of society (inc. Internet) are there for a reason. If you break the laws/rules, and I do something that wouldn't normally hurt you (if you were
Re:What the cunting fuck. (Score:2, Insightful)
Re:What the cunting fuck. (Score:3, Informative)
Re:What the cunting fuck. (Score:1)
They screwed up and I hope everyone remembers this for a while. They had better not screw up like this again, an
Re:What the cunting fuck. (Score:2)
The architects of HTTP (as people who know how the {WWW/railway} works) clearly envisiged that people should not {cross the track/design their sites with GET requests that change stuff} because a {train/web
Re:What the cunting fuck. (Score:1)
A correct analogy: A train track goes unused for many years. Despite warnings, it becomes a popular playing area for children, due to the surrounding trees, the open space, and the interesting terrain. Everyone is aware that hundreds of children play on the disused track every day.
One day, some cunt runs a high speed service down the track and kills 50 kids. Whose fault is it?
Re:What the cunting fuck. (Score:2)
Re:What the cunting fuck. (Score:2)
Anyway, I've had major sleep deprivation (mainly with UK general election--I was an election agent) hence atrocious syntax.
Here's what the laws/standards of the Internet say (verbatim) in the section on safety, section 9.1.1 (irony?), which all those whiny web designers really should have bothered to read (my emphasis):
Re:What the cunting fuck. (Score:1)
Incidentally, you're a retard and I am burning karma so fuck you.
Re:What the cunting fuck. (Score:2)
Re:What the cunting fuck. (Score:1)
It doesn't matter two stone shits that the existing state of affairs is in breach of the specs; if Google released a web browser that wrote pseudo-random 1s and 0s to the entire hard drive several times over whenever it encountere
In a sane world perhaps... (Score:2)
In a sane world, yes. In places like the U.S. the rail line would be quickly writing lots and lots of settlement checks.
My Dad worked for a power company that had to settle over a case of a kid breaking into an electrical substation and getting injured, where "breaking in" means doing something along the lines of climbing a 15-foot fence.
They settled, because they were afraid they would lose the law
Re:In a sane world perhaps... (Score:2)
This is the reason why I think the designers should assume responsibilty. Because the standard says so, and anyone who calls themse
Re:In a sane world perhaps... (Score:2)
The kids shouldn't be playing there, but that doesn't mean the automatic train idea is smart.
I think the only real shock in all this is that no one at Google was aware GET/POST was as abused as it is.
Re:In a sane world perhaps... (Score:2)
It's just like Stronghold 2, which I just bought. Now a game isn't quite the scale of a tool like this, but within a couple of hours, I'd found a good half dozen serious UI bugs and a number of significant UI design problems. The irony is that the game mechanics seem sound... these are probably fairly easy problems to fix. It amazes me how many apps are shipped with glaring errors that are evident withi
Re:What the cunting fuck. (Score:1)
Slashdot Editors: Be Careful What You Post (Score:2)
This was already posted on
Re:Slashdot Editors: Be Careful What You Post (Score:2)
-Bill
google got hacked (Score:1)
Maybe now people will fix the security holes... (Score:1)
If you can delete content by following a link, then this is a major security hole. Any website could easily embed such a link into java, javascript, even just an image link. Someone could send you an email with an image referencing the link. This is one place you should be following the spec. If you're making an important side-effect, use POST.
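For example (the victim URL is invented), any page you happen to view could carry something like:

```html
<!-- If this URL deletes via GET, merely rendering the page fires the delete -->
<img src="http://example.com/forum/delete?msg=123" width="1" height="1" alt="">
```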
Isn't googlebot just as dangerous? (Score:1, Interesting)
What would you say to a webmaster that sticks "delete" links everywhere on their pages, and suddenly finds that Googlebot, in its daily rounds, wipes out their entire wiki?
Somebody isn't following the standards (Score:1, Insightful)
It's safe, it's an emerging standard, and webmasters maintain control. Why isn't Google following the standard?
Appreciate the irony (Score:2, Informative)
I hope you appreciate the irony of posting such comments on a site whose Logout link is implemented via a GET (see the upper left of your screen). That's the point: every site implements Logout as a link, and Google should have recognized this.
PS while I'm writing I might as well point out my previous GWA comment [slashdot.org] from a few days be
Re:Appreciate the irony (Score:1)
Anyway, did anyone notice that another problem the prefetching creates is bandwidth costs for the affected websites? Does GWA follow robots.txt? [I guess not, since then a lot of sites would be off limits.]
My 2 cents.
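For what it's worth, this is the sort of robots.txt rule (paths are examples) that a webmaster uses to keep well-behaved robots away from expensive or state-changing URLs:

```
User-agent: *
Disallow: /cgi-bin/
Disallow: /admin/
```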
Re:Appreciate the irony (Score:2)
You know what's funny... (Score:2)
All this stuff we bitch and moan about here probably won't make a dent in the adoption of Google's accelerator and they're just going to run roughshod over webmasters whose sites do
maximum capacity reached? (Score:1)
"Thank you for your interest in Google Web Accelerator. We have currently reached our maximum capacity of users and are actively working to increase the number of users we can support."
Maybe this has something to do with all these security concerns?