Message ID: 1739bfe0907150649l61d6b0dga6825e84dc45832d@mail.gmail.com (mailing list archive)
State: Superseded, archived
Headers:
From: jonathan mzengeza <jtmze87@gmail.com>
To: b.a.t.m.a.n@lists.open-mesh.net
Date: Wed, 15 Jul 2009 15:49:34 +0200
Subject: [B.A.T.M.A.N.] [vis] fixed partial json output
Commit Message
jonathan mzengeza
July 15, 2009, 1:49 p.m. UTC
Partial output was caused by failing to read the HTTP request. This
patch reads the HTTP request into a temporary buffer before discarding
it.
Signed-off-by: Jonathan Mzengeza <jtmze87@gmail.com>
Comments
> Partial output was caused by failing to read the HTTP request. This
> patch reads the HTTP request into a temporary buffer before discarding
> it.

This patch creates an endless loop on unrecoverable socket errors. See
read(3) for more information about return codes. Please provide more
information if I am wrong.

Best regards,
Sven
> Thanks, is this better?
There are several things I don't completely understand. How does the
receive buffer affect the output buffer in this case?
Isn't it possible to disable non-blocking sockets in vis.c:635-638 and
change the read to the following for JSON: try to recv data until the
read fails or an empty line (please check the HTTP RFC before
implementing) appears. If the read fails, discard everything. If the
empty line appears, send the data. You should create an extra function
to read (and discard) the header stuff.
My personal opinion about the errno and delay stuff is... I don't like
it. It seems to be somewhat correct to ask again on EAGAIN according to
the man page, but we could do it in a cleaner way if possible.
Maybe someone sees a problem in my proposal. The only thing I see is
that the read should appear before locking the current buffer or a "bad
person" (me) could delay the new buffers for all others by connecting
and then waiting for a long time. I see the same problem in your
current implementation.
Regards,
Sven
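
As a rough sketch of the extra header-reading function Sven describes,
assuming the socket has been switched to blocking mode; the helper name
and the byte-at-a-time reading are illustrative, and none of this is
taken from vis.c:

#include <unistd.h>

/* Sketch: read and discard an HTTP request from a blocking socket
 * until the empty line ("\r\n\r\n") that terminates the header.
 * Returns 0 once the empty line was seen, -1 on error or EOF. */
static int read_and_discard_http_header(int sock)
{
	char last4[4] = { 0, 0, 0, 0 };
	char c;

	while (read(sock, &c, 1) == 1) {
		last4[0] = last4[1];
		last4[1] = last4[2];
		last4[2] = last4[3];
		last4[3] = c;

		/* empty line reached: the request header is complete */
		if (last4[0] == '\r' && last4[1] == '\n' &&
		    last4[2] == '\r' && last4[3] == '\n')
			return 0;
	}

	return -1;	/* read failed or peer closed: discard everything */
}

Reading one byte at a time keeps the sketch simple; a real
implementation would buffer, and per the HTTP RFC it might also
tolerate a bare "\n\n" from sloppy clients.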
If I can chime in with my 5 cents,

On 15 Jul 2009, at 20:50, Sven Eckelmann wrote:

>> Thanks, is this better?
> There are several things I don't completely understand. How does the
> receive buffer affect the output buffer in this case?

Without spending a day rooting through the syscall implementation I'm
not sure why this is the case. Neither the documentation nor Google has
been much help beyond revealing that:

* It is a widespread phenomenon: writes get truncated when there is
  unread data on the socket.
* There exist many opinions and precious little consensus on the
  precise semantics of read(2) and write(2)!

Oy vey.

> Isn't it possible to disable non-blocking sockets in vis.c:635-638
> and change the read to the following for JSON: try to recv data until
> the read fails or an empty line (please check the HTTP RFC before
> implementing) appears. If the read fails, discard everything. If the
> empty line appears, send the data. You should create an extra
> function to read (and discard) the header stuff.

If everyone else thinks this is a good idea I would also greatly prefer
to modify the vis server to use blocking sockets. Given that it is a
threaded implementation, it is not immediately obvious to me why this
approach was chosen originally, unless the threading got added later?

Thoughts anyone?

> My personal opinion about the errno and delay stuff is... I don't
> like it. It seems to be somewhat correct to ask again on EAGAIN
> according to the man page, but we could do it in a cleaner way if
> possible.

*nod* It _is_ ugly. It is simulating blocking behavior on a
non-blocking object which, really, means that one should ideally just
be using a blocking object in the first place! :-D

> Maybe someone sees a problem in my proposal. The only thing I see is
> that the read should appear before locking the current buffer or a
> "bad person" (me) could delay the new buffers for all others by
> connecting and then waiting for a long time. I see the same problem
> in your current implementation.

I'm not sure the current implementation performs a read between locks,
but I agree that the server should not block other connections while
waiting for a response from another connection.

Thank you for the code review, Sven!

 - antoine

--
http://7degrees.co.za
"Libré software for human education."
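
A minimal sketch of the switch to blocking sockets that Antoine
prefers, using plain fcntl(2); the helper name is invented and the
call sites in vis.c are not shown:

#include <fcntl.h>

/* Sketch: put a client socket (back) into blocking mode by clearing
 * O_NONBLOCK from its file status flags. Returns -1 if fcntl fails. */
static int make_socket_blocking(int sock)
{
	int flags = fcntl(sock, F_GETFL, 0);

	if (flags < 0)
		return -1;

	return fcntl(sock, F_SETFL, flags & ~O_NONBLOCK);
}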
> > Maybe someone sees a problem in my proposal. The only thing I see
> > is that the read should appear before locking the current buffer or
> > a "bad person" (me) could delay the new buffers for all others by
> > connecting and then waiting for a long time. I see the same problem
> > in your current implementation.
>
> I'm not sure the current implementation performs a read between locks,
> but I agree that the server should not block other connections while
> waiting for a response from another connection.

I didn't want to say that the current implementation in trunk uses a
read, but Jonathan Mzengeza's patch does it.

Best Regards,
Sven
On 16 Jul 2009, at 09:36, Sven Eckelmann wrote:

>>> Maybe someone sees a problem in my proposal. The only thing I see
>>> is that the read should appear before locking the current buffer or
>>> a "bad person" (me) could delay the new buffers for all others by
>>> connecting and then waiting for a long time. I see the same problem
>>> in your current implementation.
>>
>> I'm not sure the current implementation performs a read between
>> locks, but I agree that the server should not block other
>> connections while waiting for a response from another connection.
>
> I didn't want to say that the current implementation in trunk uses a
> read, but Jonathan Mzengeza's patch does it.

It _is_ Jonathan's patch I'm referring to :-) As it stands his code
does not delay the new buffers for all others if someone connects and
then waits a long time. But that is a choopchick and not really
important one way or another; I think we both agree that switching to
blocking sockets is a promising idea.

Does anyone with insights into the gooey innards of vis.c have any
thoughts about this strategy?

 - a
On Thursday 16 July 2009 15:45:19 Antoine van Gelder wrote:
> But that is a choopchick and not really important one way or another;
> I think we both agree that switching to blocking sockets is a
> promising idea.
>
> Does anyone with insights into the gooey innards of vis.c have any
> thoughts about this strategy?

I did a little research and noticed that the non-blocking clients were
introduced by me in revision 491. After reading the commit message I
roughly remember the reason for the change to non-blocking: we were
working on the 3D visualization tool s3d and could bring the vis server
to a standstill by running the TCP client (meshs3d) in gdb and stopping
its execution. The TCP client was not killed, but it would no longer
read from the socket; the TCP connection stayed open, so the TCP write
call would never return and hung forever. This solution was a quick fix
which probably is far from perfect.

While searching for some info on that topic I found an interesting page
which might prove helpful:
http://blog.netherlabs.nl/articles/2009/01/18/the-ultimate-so_linger-page-or-why-is-my-tcp-not-reliable

Regards,
Marek
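
One common middle ground, not proposed in the thread but relevant to
the hang Marek describes, is to keep blocking sockets and set a send
timeout on them, so that write() to a stalled client eventually fails
instead of blocking forever. A sketch, with the five-second value
chosen arbitrarily:

#include <sys/socket.h>
#include <sys/time.h>

/* Sketch: bound how long a blocking write() may stall on a client
 * that stops reading (e.g. meshs3d paused in gdb). After the timeout,
 * write() returns a short count or fails with EAGAIN instead of
 * hanging forever. */
static int set_send_timeout(int sock)
{
	struct timeval tv = { .tv_sec = 5, .tv_usec = 0 };

	return setsockopt(sock, SOL_SOCKET, SO_SNDTIMEO, &tv, sizeof(tv));
}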
On Friday 17 July 2009 15:10:55 jonathan mzengeza wrote:
> I looked at the site and it talked about something I had already
> tried, which works well. Here is another patch; hope it's better.
>
> + shutdown(thread_data->socket, SHUT_WR);

Does your patch also work without the shutdown? Some clients (e.g. s3d)
read the constant stream of vis data to update their visualization
without re-opening the connection.

Regards,
Marek
Index: vis.c
===================================================================
--- vis.c	(revision 1343)
+++ vis.c	(working copy)
@@ -566,6 +566,7 @@
 	buffer_t *last_send = NULL;
 	size_t ret;
 	char* send_buffer = NULL;
+	char tmp[512];
 
 	while ( !is_aborted() ) {
 
@@ -579,6 +580,11 @@
 			send_buffer = current->dot_buffer;
 		} else {
 			send_buffer = current->json_buffer;
+			ret = read( thread_data->socket, tmp, sizeof( tmp ));
+			while ( ret == -1 ) {
+				ret = read( thread_data->socket, tmp, sizeof( tmp ));
+				usleep(250);
+			}
 		}
 
 		ret = write( thread_data->socket, send_buffer, strlen( send_buffer ) );
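
For comparison, a sketch of how the retry loop from the patch could
avoid the endless loop Sven points out: retry only transient failures
(EAGAIN, EINTR) and give up on unrecoverable errors. The helper
wrapping and buffer size are illustrative, not part of the patch:

#include <errno.h>
#include <unistd.h>

/* Sketch: drain pending request bytes without spinning forever.
 * Unlike the loop in the patch above, only transient failures are
 * retried; unrecoverable read errors end the loop. */
static void drain_request(int sock)
{
	char tmp[512];
	ssize_t ret;

	for (;;) {
		ret = read(sock, tmp, sizeof(tmp));

		if (ret >= 0)
			break;			/* got data (or EOF) */

		if (errno == EAGAIN || errno == EINTR) {
			usleep(250);		/* transient: wait and retry */
			continue;
		}

		break;				/* real error: stop retrying */
	}
}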