Google login dialog does not have the "stay signed-in" checkbox anymore - your session is always persistent. Once logged in, the web browser will have access to your account even after reboot.
To prevent part of a web application from being scanned by search engines and other web crawlers, we add a robots.txt like:
User-agent: *
Disallow: /path
It's so simple - what can go wrong? Here is a real story that happened to me.
Turns out, my cloud platform - Google App Engine - has a caching and compression layer between the application and the Internet. It can gzip content for one client, cache it, and then return the same gzipped responses to other clients, even if they haven't specified the Accept-Encoding: gzip header, or have even explicitly requested uncompressed content.
This behaviour, unwise in my opinion, is documented here: https://cloud.google.com/appengine/docs/legacy/standard/java/how-requests-are-handled#response_caching
Example:
# Force a gzipped response
$ curl -v -H 'Accept-Encoding: gzip' -H 'User-Agent: gzip' https://yourapp.appspot.com/robots.txt
...
content-encoding: gzip
...
Warning: Binary output can mess up your terminal. Use "--output -" to tell
Warning: curl to output it to your terminal anyway, or consider "--output
Warning: <FILE>" to save to a file.
# Now explicitly request uncompressed robots.txt
$ curl -v -H 'Accept-Encoding: identity' https://yourapp.appspot.com/robots.txt
...
content-encoding: gzip
...
Warning: Binary output can mess up your terminal. Use "--output -" to tell
Warning: curl to output it to your terminal anyway, or consider "--output
Warning: <FILE>" to save to a file.
(BTW, although the doc says the default caching duration is 10 minutes, I observed Google App Engine returning gzipped responses for at least 30 minutes.)
A web crawler (Dotbot from moz.com) encountered such a gzipped robots.txt response and was unable to parse it, so it considered all the URLs in the app domain as allowed for crawling. Moreover, the crawler cached the gzipped response. All its subsequent requests to robots.txt were conditional (ETag based, I think) and resulted in 304 Not Modified, so the crawler continued to rely on the gzipped version it could not parse, and regularly visited the unwanted URLs.
Luckily, Dotbot clearly identifies itself in the User-Agent header, and they have a working support email, so after five months of communication in a ticket I discovered the reason.
I fixed the Google App Engine behaviour by adding an explicit configuration to appengine-web.xml:
<static-files>
  <include path="/**">
    <http-header name="Vary" value="Accept-Encoding"/>
  </include>
  <exclude path="/**.jsp"/>
</static-files>
I also made a small modification to robots.txt itself, to be sure its ETag changes.
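Why the ETag change matters: a conditional request re-downloads the body only when the validator differs. A minimal Python sketch of that server-side logic (the function and the ETag values are illustrative, not App Engine's actual implementation):

```python
# Sketch of ETag-based conditional request handling.
# If the client's If-None-Match matches the current ETag, the server
# returns 304 Not Modified with no body, and the client keeps its
# cached copy - in my case, the unparseable gzipped robots.txt.
def handle_request(current_etag, body, if_none_match=None):
    if if_none_match == current_etag:
        return 304, None
    return 200, body

robots = b"User-agent: *\nDisallow: /path\n"

# Crawler revalidates its cached copy: ETag unchanged -> 304, stale copy kept.
status, _ = handle_request('"v1"', robots, if_none_match='"v1"')

# After editing robots.txt the ETag changes -> a full 200 response is served.
status2, body2 = handle_request('"v2"', robots, if_none_match='"v1"')
```

Editing the file forces a new ETag, so the next conditional request falls through to a full response - which, with the Vary header in place, is no longer the wrongly-gzipped copy.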
Stackoverflow and related sites repeatedly display their cookie confirmation dialog.
So many times I had to press "Customize settings" and then select "Confirm my choices". This time I mistakenly pressed "Accept all cookies". How do I undo that? Why do they show the dialog every time? Will they keep showing it, or does it only annoy you until you press "Accept all cookies"?
How to automatically transform any imperative code into functional code?
Copy all the data before invoking the imperative code.
That's not a joke. There are techniques (e.g. copy-on-write) and data structures which can make it efficient. I have no time to write down all the analogies coming to my mind now. What I want to say is that the imperative and functional approaches are, in many important aspects, not that different. We can think of an imperative assignment to a variable as a pure function which computes a new world where this variable has a new value.
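A minimal Python sketch of the idea (all names are illustrative): the "functional" version copies the affected data first, then runs the very same imperative code on the copy, returning a new "world" while the old one stays intact.

```python
def deposit_imperative(world, account_id, amount):
    # Plain imperative code: mutates the world in place.
    world[account_id]["balance"] += amount

def deposit_functional(world, account_id, amount):
    # Copy all the data before invoking the imperative code:
    # the old world remains untouched, a new world is returned.
    new_world = {acc_id: dict(acc) for acc_id, acc in world.items()}
    deposit_imperative(new_world, account_id, amount)
    return new_world

old_world = {"alice": {"balance": 100}}
new_world = deposit_functional(old_world, "alice", 50)
# old_world still has balance 100; new_world has balance 150
```

A naive full copy like this is O(size of world) per update; copy-on-write and persistent data structures share the unchanged parts, which is what makes the technique practical.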
An idea I kept in mind for several years. Finally experimented with it: https://github.com/avodonosov/pocl. I consider the experiment successful. It could be a useful application acceleration technique.
Don't just use gray text on white background. If you really want to make your text difficult to read, you will achieve even better results with white text on white background.
Upd: I am not the only one who thinks so: http://contrastrebellion.com/
my-application
  web-server 1.1.1
    commons-logging 1.1.1
  db-client 1.1.1
    commons-logging 1.1.1
  authentication 1.1.1
    commons-logging 1.1.1

Now commons-logging changes its API incompatibly and is released as commons-logging 2.0.1. Authentication adopts commons-logging 2.0.1 while the other libraries still depend on 1.1.1:

my-application
  web-server 1.1.1
    commons-logging 1.1.1
  db-client 1.1.1
    commons-logging 1.1.1
  authentication 1.1.2
    commons-logging 2.0.1

Now my-application is broken, because the dependency tree includes two versions of commons-logging which share package and class/function names, and thus cannot be loaded simultaneously.
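The conflict can be detected mechanically by flattening the tree and grouping the requested versions per library. A sketch in Python (the data mirrors the example above; the code is illustrative, not any real dependency resolver):

```python
from collections import defaultdict

# Direct dependencies of my-application, each mapped to its own dependencies.
dependency_tree = {
    "web-server 1.1.1": ["commons-logging 1.1.1"],
    "db-client 1.1.1": ["commons-logging 1.1.1"],
    "authentication 1.1.2": ["commons-logging 2.0.1"],
}

def find_conflicts(tree):
    # Group every requested version by library name.
    versions = defaultdict(set)
    for requirements in tree.values():
        for requirement in requirements:
            name, version = requirement.rsplit(" ", 1)
            versions[name].add(version)
    # A library requested in more than one version is a conflict.
    return {name: vers for name, vers in versions.items() if len(vers) > 1}

conflicts = find_conflicts(dependency_tree)
# conflicts == {"commons-logging": {"1.1.1", "2.0.1"}}
```

Build tools typically resolve such a conflict by forcing a single version for the whole tree, which only works when that version is compatible with every consumer.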