Thursday, June 21, 2007

Howto debug network errors in squid with wireshark

If you see errors such as the following in the squid logs:

> 2007/06/12 11:15:45| parseHttpRequest: Unsupported method '^C'
> 2007/06/12 11:15:45| clientReadRequest: FD 145 (x.x.x.x:62332) Invalid
> Request
> 2007/06/12 11:15:48| parseHttpRequest: Requestheader contains NULL
> characters
> 2007/06/12 11:15:48| parseHttpRequest: Unsupported method '^C'
> 2007/06/12 11:15:48| clientReadRequest: FD 1611 (x.x.x.x:60853) Invalid
> Request
> 2007/06/12 11:15:49| parseHttpRequest: Requestheader contains NULL
> characters
> 2007/06/12 11:15:49| parseHttpRequest: Unsupported method '^C'

You can debug them with tools such as Ethereal/Wireshark:

PROCEDURE:

1. Start Wireshark and begin a new packet capture.

2. When you see the error in cache.log, enter the display filter

ip.addr == YYYYY && tcp.port == XXXXX

where YYYYY is the IP address and XXXXX is the port number from the Invalid
Request log line.

Then select the first packet shown (it should be a SYN) and choose Analyze
-> Follow TCP Stream. This opens a new window with the decoded TCP stream,
where you'll find all the important data about the problem.
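If you prefer the command line, roughly the same capture can be done with tshark, the terminal version of Wireshark (the interface name, IP and port below are placeholders you must substitute, and the -z follow option syntax is from recent tshark versions, so check your man page):

```
# capture traffic for the client seen in the Invalid Request log line
tshark -i eth0 -f "host x.x.x.x and tcp port 62332" -w /tmp/squid-debug.pcap

# later, print the decoded TCP stream from the capture file
tshark -r /tmp/squid-debug.pcap -z follow,tcp,ascii,0
```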

Wednesday, May 30, 2007

Differences between an HTTP/FTP/SSL proxy and a SOCKS proxy

Squid is an HTTP proxy. All communication to Squid is HTTP. But HTTP proxies can resolve a number of different URL schemes for their HTTP clients.

The difference between an HTTP proxy and SOCKS is that the HTTP proxy is
fully aware of the protocol being proxied and, as a result, has well-defined behavior. The main practical difference is the ability to cache results.

SOCKS, on the other hand, is protocol neutral and SHOULD NOT perform caching or other heavily protocol-dependent actions.

With a web browser you can use both kinds of proxies to surf the web.

Monday, April 02, 2007

Transparent SSL proxy

Does squid actually support this feature? Not yet (as of squid-2.6.STABLE12).

Several people ask on the squid mailing list: they are running a transparent proxy but also need to transparently "proxy" port 443 (HTTPS), meaning they want their transparent proxy to work with https URLs.

But there are some misconceptions about HTTPS/SSL and proxies/reverse proxies.

Brief description:

Working as a normal proxy, squid can tunnel SSL requests when an HTTP user-agent requests them via the HTTP proxy (see the Netscape documentation).
This involves an HTTP method (CONNECT) for establishing the tunnel.
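The CONNECT exchange looks roughly like this (the host name is a placeholder):

```
CONNECT www.example.com:443 HTTP/1.1
Host: www.example.com:443

HTTP/1.0 200 Connection established

[... SSL handshake and encrypted traffic then flow through the tunnel ...]
```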

But in an interception proxy (also known as a transparent proxy), the proxy becomes the server for the client and the client for the web server. The direct connection between the two endpoints is broken, and SSL is designed to hide the identity of each endpoint from intermediaries, so in this special case the transparent proxy doesn't know how to handle SSL requests, because it is not operating as a normal proxy.

Some ideas to implement:

  • Listen on a different port than the current port used for the transparent proxy (usually 80)
  • Accept the SSL connection.
  • Do the acl lookups: source/destination IP, source MAC, time, srcdomain...
  • Convert it to an HTTP CONNECT request suitable for the http proxy.

Another small project to work on with squid.


Wednesday, March 14, 2007

A fun project with squid (SoC?)

Right now, Google is starting the SoC (Summer of Code) to attract more open source projects and to encourage software students to participate in open source development.

Some ideas to help with squid:

On the squid list we see several sites that do not work with squid because the sites themselves are broken. These sites can fail for multiple reasons, so a fun project would be to write software to test such broken sites.

Some issues with these sites are:
  • ECN
  • Windows Scaling
  • Forgetting Vary
  • Mixing up ETag (same ETag on multiple incompatible entities)
  • Various malformed responses
    • Duplicate Content-Length headers
    • Malformed headers
    • Repeated single-value headers
And you can help squid in many other ways too; just take a look at Bugzilla.

Any more squid-related project ideas?

Wednesday, March 07, 2007

Squid accelerator tips for serving content when a backend server is down

We can have squid serve stale objects even when the backend server is down.

  • Make sure negative_ttl is set to 0 seconds to disable the caching of errors.
  • In case the web server becomes unreachable, make sure the connect timeout is sufficiently short. There are three different connect timeouts depending on your config and requirements:
- connect_timeout: for requests going DIRECT
- peer_connect_timeout: for requests going to a cache_peer
- cache_peer ... connect-timeout=XXX: a specific timeout for this cache_peer, which overrides peer_connect_timeout.

Default values are 1 minute for requests going direct and 30 seconds for requests sent to a cache_peer.

As a general statement in accelerator setups, you want the backend connect timeout quite short, a few seconds.
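Putting the above together, a minimal squid.conf sketch (the peer IP, port and timeout values are placeholders to adapt to your setup; directive names are those of the squid.conf documentation):

```
# don't cache error responses
negative_ttl 0 seconds

# short connect timeouts so failures are detected quickly
connect_timeout 5 seconds
peer_connect_timeout 5 seconds

# per-peer override: this backend gets an even shorter timeout
cache_peer 192.0.2.10 parent 80 0 no-query originserver connect-timeout=3
```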

Tuesday, February 27, 2007

New look for the squid web site.

Adrian Chadd is an active squid developer who has helped with the new look of the web page.



The squid team is highlighting the new "How to Help Out" section to attract more people to help improve the squid software.

It seems that in the near future we'll have a new merchandise section where people from the list can buy squid-related items such as t-shirts, mugs and the like to help the project.

Personally, I feel the squid list is a good source of information, so we should all feel closer to squid and try to help the squid team in some way, because YOU can help in many different ways.

Thursday, February 15, 2007

SSL support, Squid 2.6 branch and RedHat

Testing the latest squid version (squid-2.6.STABLE9) with SSL support, in order to operate it as a reverse proxy, I got these errors during compilation:

---cut---
ssl_support.h:49: syntax error before '*' token
ssl_support.h:49: warning: type defaults to `int' in declaration of `sslCreateServerContext'
ssl_support.h:49: warning: data definition has no type or storage class
ssl_support.h:50: syntax error before '*' token
ssl_support.h:50: warning: type defaults to `int' in declaration of `sslCreateClientContext'
ssl_support.h:50: warning: data definition has no type or storage class
ssl_support.h:54: syntax error before "SSL"
ssl_support.h:56: syntax error before '*' token
ssl_support.h:57: syntax error before '*' token
ssl_support.h:58: syntax error before '*' token
ssl_support.h:59: syntax error before '*' token
ssl_support.h:60: syntax error before '*' token
---cut---

What's happening?

The problem here is with RedHat: they have built OpenSSL with Kerberos support

[root@proxy squid-2.6.STABLE9]# rpm -qR openssl-devel-0.9.7a-33.12
krb5-devel

but Kerberos is not in the standard library and include path. This makes it impossible to build OpenSSL applications without manually including /usr/kerberos in the include and library paths.
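A common workaround on these RedHat versions is to point the compiler at the Kerberos paths by hand when configuring squid (the paths are the RedHat defaults and may differ on your system; this is a sketch, not a full configure line):

```
CPPFLAGS="-I/usr/kerberos/include" \
LDFLAGS="-L/usr/kerberos/lib" \
./configure --enable-ssl --with-openssl
```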

More info.

Tuesday, February 06, 2007

Reverse proxy configurations

The configuration of a reverse proxy depends on what functionality you want to achieve.
There are three ways of using it, depending on what your functionality
requirements are:

a) With Squid acting as an accelerator/reverse proxy for a defined list
of sites, upgrading these sites to https. You then use the ssl option to
cache_peer to wrap the request in SSL.
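For case (a), a minimal sketch of such an accelerator setup (hostnames and the site name are placeholders; option names follow squid-2.6 syntax, and the ssl option requires squid built with SSL support):

```
# accept plain http for the accelerated site
http_port 80 accel defaultsite=www.example.com

# forward to the backend over SSL, upgrading the request to https
cache_peer backend.example.com parent 443 0 no-query originserver ssl name=secure
cache_peer_domain secure www.example.com
```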

b) By using an HTTP client that sends https:// URLs to Squid. Squid will
then maintain the SSL on behalf of the client.

Here, the client has to send the https:// request using plain HTTP to the
proxy, just as it does for http://. That is:
GET https://www.example.com/path/to/file HTTP/1.1
[headers]
This does not work for clients using the CONNECT method to ask for an SSL
tunnel over the proxy.

In this case, the clients know they should not run SSL themselves and
delegate this task to the proxy: they don't have any SSL capabilities and
instead rely on the proxy to perform the SSL encryption.


c) Using a URL rewriter helper to rewrite selected http:// URLs into
https:// per your own specifications, making Squid process the request
as an https:// request even if the client requested http://.

In this case, the clients don't do anything special; the rewriting of
http:// URLs into https:// happens at the proxy.

It's also possible to extend Squid with the capability to decrypt
CONNECT SSL proxy requests, allowing inspection of HTTPS traffic.
For more information on this approach you can contact Henrik Nordstrom.
Squid contacts.

Monday, February 05, 2007

Squid running out of free ports.

Symptoms from a busy squid under high traffic:
commBind: Cannot bind socket FD 98 to *:0: (98) Address already in use

Solution:

You have run out of free ports; all available ports are occupied by
TIME_WAIT sockets.

Things to look into:

1. Make sure you use persistent connections internally between Squid and
the web servers. This cuts down considerably on the number of connections
initiated per second.

2. Configure the unassigned (ephemeral) port range to be as big as possible
in your OS. On Linux this is set in /proc/sys/net/ipv4/ip_local_port_range.
The biggest possible range is 1024-65535, which can sustain at least 500
connections/s of continuous load from squid to the web servers.
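On Linux you can inspect the current range like this (a sketch; widening it needs root):

```shell
# show the current ephemeral (local) port range: two numbers, low and high
cat /proc/sys/net/ipv4/ip_local_port_range
```

To widen it, as root: echo "1024 65535" > /proc/sys/net/ipv4/ip_local_port_range, or set net.ipv4.ip_local_port_range in /etc/sysctl.conf to make it persistent.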

What does Squid do when it's out of file descriptors?

When Squid sees it is short of file descriptors, it stops accepting new
requests and focuses on finishing what it has already accepted.

And long before there is a shortage, it disables the use of persistent
connections to limit the pressure on concurrent file descriptors.

What does it do in such a case?

Once Squid has detected a file descriptor limitation, it won't go
above the number of file descriptors it was using at that time, and you
need to restart Squid to recover after fixing the cause of the system-wide
file descriptor shortage.

Does squid recover, or does it need to be restarted?

It depends on the reason for the file descriptor shortage.

If the shortage is due to Squid itself using very many file descriptors,
then no action needs to be taken (except perhaps increasing the number of
file descriptors available to Squid to avoid the problem in the future).
Squid automatically adjusts to the per-process limit, and to the
system-wide limit if it's lower than the per-process limit.
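One hedged sketch of raising the limit: the max_filedescriptors directive is an assumption here, since it is not available in every squid version (check your squid.conf.default; older builds need a higher compiled-in limit instead), and the OS limit (ulimit -n) must allow it before Squid starts:

```
# squid.conf -- only if your build supports this directive
max_filedescriptors 8192
```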

If the shortage is due to some other process causing the system as a
whole to temporarily run short of file descriptors or related resources,
then you need to restart Squid after fixing the problem, as Squid has been
fooled in this situation into thinking that your system cannot support
a reasonable number of active connections.