GigaNews problems

Earthlink tore up their own Usenet server farm a long time ago. For customers who were around at that time, they have provided Usenet service through Giganews ever since. This has worked well for the most part. But there are problems.

Email notices

It used to be that I would occasionally receive an email from Giganews telling me that I was approaching my data limit (35GB/month). No hint was ever provided as to what usage level triggered that email. After some effort I found it to be about 25GB.

Then the emails stopped. They don't even appear in the spam folder. I suspect that they have fallen prey to yet another level of spam filtering. At some point the email headers started showing that all incoming email was first passing through servers at vadesecure.net. Giganews had been altering (most would call it forging) the headers so the notices looked like they were coming from Earthlink. That likely triggers this extra (silent) level of filtering.

Missing article bodies

I have used an old Perl script (aub) to assemble multi-part binary files for over twenty years. It works well enough for my needs. Then a year or three ago it began having trouble.

aub first downloads the headers for new messages and then scans the subject line looking for files to assemble. If it finds all of the parts for a file, it then requests the message bodies using the NNTP BODY command. This began to occasionally return a "430 no such article" error. Very odd considering that the server had just provided the header.
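
The failure is easy to reproduce outside of aub. A minimal probe with the stock Net::NNTP module looks something like this (the login details are placeholders and the 20-article window is arbitrary, but the XHDR-then-BODY sequence is the same one aub uses):

    use strict;
    use warnings;
    use Net::NNTP;

    # Login details are placeholders for the real account.
    my $nntp = Net::NNTP->new('news.west.earthlink.net', Timeout => 60)
        or die "connect failed\n";
    $nntp->authinfo('user', 'pass') or die "login failed\n";

    my ($count, $first, $last) = $nntp->group('alt.binaries.sounds.mp3.dr_demento')
        or die "GROUP failed\n";

    # Subject lines for the last 20 articles the server claims to have.
    my $subjects = $nntp->xhdr('Subject', [ $last - 20, $last ]);

    for my $num (sort { $a <=> $b } keys %$subjects) {
        # The server just listed this article, so BODY should succeed.
        # (The body itself is discarded; we only care about the status.)
        next if $nntp->body($num);
        my $msg = $nntp->message;
        chomp $msg;
        print "$num: ", $nntp->code, " $msg\n";
    }
    $nntp->quit;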

This wasn't a big problem at first because most files had forward error correction (parity files) available. But aub would throw away an entire file when this error occurred, so as the frequency of the problem increased it reached the point where sometimes parity files weren't enough. I dug into the Perl code and modified it to just continue on with what it had. That was sufficient for a while.

I rarely use nzb files but when I do it is with the nzbperl program. This is fairly smart and can recognize that it already has a part of a file, skipping the download. So I tried running it a second time, and sometimes this picked up the missing files, showing that the problem was not only random but transient. Digging into the docs revealed a keepbroken option that helped even more.
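
Since the failures are transient, the same idea can be bolted onto any Perl client: retry a failed BODY after a pause instead of discarding the file. A rough sketch (the three attempts and the 30-second pause are arbitrary choices):

    # Retry a transient "430 no such article" instead of giving up.
    sub body_with_retry {
        my ($nntp, $num) = @_;
        for (1 .. 3) {
            my $body = $nntp->body($num);
            return $body if $body;
            last if $nntp->code != 430;  # a different error; retrying won't help
            sleep 30;
        }
        return undef;                    # caller continues with what it has
    }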

Bogus responses to the GROUP command

More recently I began seeing the occasional bogus response to the NNTP GROUP command. The client sends this to select a particular newsgroup to read. The reply includes an article count and the first and last article numbers. Sometimes the first and last article numbers were given as 2 and 1. Normally when a group has no articles available these are equal, so a first greater than last is very odd.

A later attempt would get the correct values. Then I noticed a variant of the bogus reply where the last article number dropped to a lower value. For example:

      211 22714 1636319 1659032 alt.binaries.sounds.mp3.dr_demento
      211 22714 1636319 1659032 alt.binaries.sounds.mp3.dr_demento
      211 22714 1636319 1659032 alt.binaries.sounds.mp3.dr_demento
      211 22714 1636319 1659032 alt.binaries.sounds.mp3.dr_demento
      211 22561 1636319 1658879 alt.binaries.sounds.mp3.dr_demento
      211 22714 1636319 1659032 alt.binaries.sounds.mp3.dr_demento
      211 22561 1636319 1658879 alt.binaries.sounds.mp3.dr_demento
      211 22714 1636319 1659032 alt.binaries.sounds.mp3.dr_demento
  

I finally decided to complain about the problems with the BODY and GROUP commands. Not to Earthlink customer support, because they wouldn't have a clue, but directly to Giganews. I didn't have much hope that this would result in any reply at all since I don't really have an account with them. But I did get a reply.

After banging my head against the first level of customer support for a while, the issue was finally sent on to the next level. Not that it helped much. I explained that both problems were occasional, random, and transient. The problem was referred to their "engineers" (I can sneer at that since I have an MSEE), who reported back in less than a day that they couldn't see any trouble. And the problem was declared solved.

Data

Since they couldn't (or wouldn't) find the problem, I decided to start some data collection. (I had suggested that this was the sort of thing they ought to be doing.) I created a program that logs onto the server, issues a single GROUP command (for the Dr. Demento group), and records the timestamped result. It is set to run about every 30 minutes on my Raspberry Pi, along with its other data collection tasks. A graph is then created from the last article number.
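
The logger amounts to only a few lines of Perl. A sketch along these lines (the login details are placeholders) prints one line per run in the timestamped format shown in the samples further down this page:

    use strict;
    use warnings;
    use Net::NNTP;

    # Login details are placeholders for the real account.
    my $nntp = Net::NNTP->new('news.west.earthlink.net', Timeout => 60)
        or exit 1;
    $nntp->authinfo('user', 'pass') or exit 1;

    # One line per run: unix-time 211 count first last group
    my @g = $nntp->group('alt.binaries.sounds.mp3.dr_demento');
    printf "%d 211 %d %d %d %s\n", time, @g if @g;
    $nntp->quit;

A crontab entry such as "*/30 * * * * perl grouplog.pl >> group.log" (the file names here are hypothetical) takes care of the every-30-minutes part.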

[graph: last article number in alt.binaries.sounds.mp3.dr_demento vs. time]

I sent another email to Giganews customer support pointing them to this graph. Maybe it will help. If not, then this page stays up in the hope that it will have some effect.

December 2021

The missing articles problem is worse. To illustrate it I had to use a group a bit more active than the Dr. Demento one, which is down to zero articles available. When I look at an active group (pornstars.80s in this case) I see trouble. If I subscribe to it using Thunderbird I see an interesting phenomenon: it reports that it has found some number of new articles (1,000s), but when I let it download the headers it finds fewer than that.

So I wrote a program to probe the problem. The program accepts a group name on the command line. It selects that group, looks at the available articles, subtracts some value from the last article number, and issues an XHDR command.

I use XHDR, just as aub does, to get the subject lines. The program then goes through the list and fetches the full headers one at a time using the HEAD command. If that reports an error, the error is printed.
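
In outline, the probe looks something like this; the server name and login are placeholders, and the 2,000-article backup matches the runs described below:

    use strict;
    use warnings;
    use Net::NNTP;

    my $group = shift @ARGV or die "usage: missing <group>\n";
    print "group $group\n";

    # Server and login are placeholders for the real details.
    my $nntp = Net::NNTP->new('news.west.earthlink.net', Timeout => 120)
        or die "connect failed\n";
    $nntp->authinfo('user', 'pass') or die "login failed\n";

    my ($count, $first, $last) = $nntp->group($group) or die "GROUP failed\n";
    printf "211 %d %d %d %s\n", $count, $first, $last, $group;

    # Subject lines for the last 2,001 article numbers, as aub would fetch them.
    my $subjects = $nntp->xhdr('Subject', [ $last - 2000, $last ]);
    printf "found %d headers\n", scalar keys %$subjects;

    # Now request the full header of each article, one at a time.
    for my $num (sort { $a <=> $b } keys %$subjects) {
        next if $nntp->head($num);       # header arrived; all is well
        my $msg = $nntp->message;
        chomp $msg;
        print "$num $subjects->{$num}\n", $nntp->code, " $msg\n";
    }
    $nntp->quit;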

I was expecting to find a mismatch between the data returned by XHDR and HEAD. But I also found another problem: completely missing articles.

Just after new articles appear, some don't seem to exist at all. As an example, shortly after over 6,000 articles were posted I ran this program. It found the last article number (from the response to the GROUP command), backed up 2,000, and requested the data. This should have returned 2,001 articles. Instead I received 1,792.

Just to be sure that it wasn't a problem with my code (the read-line routine has been a problem before), I used telnet (see below) to verify. In part:

      34238530 Sue Nero - 27 clips "(Sue Nero 27).avi.001" yEnc (52/77)
      34238531 Sue Nero - 27 clips "(Sue Nero 27).avi.001" yEnc (61/77)
      34238532 Sue Nero - 27 clips "(Sue Nero 27).avi.002" yEnc (59/77)
      34238535 Sue Nero - 27 clips "(Sue Nero 27).avi.vol54+69.par2" yEnc (14/17)
      34238536 Sue Nero - 27 clips "(Sue Nero 27).avi.001" yEnc (54/77)
      34238537 Sue Nero - 27 clips "(Sue Nero 27).avi.vol54+69.par2" yEnc (15/17)
      34238539 Sue Nero - 27 clips "(Sue Nero 27).avi.003" yEnc (56/66)
      34238541 Sue Nero - 27 clips "(Sue Nero 27).avi.001" yEnc (63/77)

The number at the beginning of the line is the article number and there should be no gaps. Except that there are four missing here. (I tried using the HEAD command on one of the missing article numbers and got a 423 error.)

It appears that the server is receiving articles, assigning article numbers, storing them, and then being unable to find them. Until later. Maybe. I tried again 24 hours later with the same result: 1,792 articles retrieved instead of 2,001.

5 January 2022

Performance is quite variable. A couple more batches of articles were posted; one seemed to do well, but today's, not so much. Thunderbird retrieved fewer than the number of articles it first claimed were available. Running my program produced, in part:

      34264349 The Golden Age of Porn - Giant Juggs "The Golden Age of Porn - Giant Juggs.mkv.vol165+154.par2" yEnc (08/94)
      423 no such article in group
      34264350 The Golden Age of Porn - Giant Juggs "The Golden Age of Porn - Giant Juggs.mkv.vol077+088.par2" yEnc (15/55)
      423 no such article in group
      34264351 The Golden Age of Porn - Giant Juggs "The Golden Age of Porn - Giant Juggs.mkv.011" yEnc (102/130)
      423 no such article in group

A few hours later most of the missing articles had magically appeared:

      $ ./missing alt.binaries.erotica.pornstars.80s
      group alt.binaries.erotica.pornstars.80s
      211 123337 34141185 34264521 alt.binaries.erotica.pornstars.80s
      found 2001 headers
      34263137 The Golden Age of Porn - Giant Juggs "The Golden Age of Porn - Giant Juggs.mkv.001" yEnc (115/130)
      430 no such article

Except for the one that returned a 430 instead of a 423 error.

Telnet

If you happen to be using a newsreader that isn't giving you quite the amount of information you desire, you can access a server using telnet. The trick is to specify the correct (NNTP) port: "telnet news.west.earthlink.net 119" in my case. Then provide your login information and type in commands. You will want to have a guide to the NNTP commands handy. Avoid using the ARTICLE or BODY commands in a binary group, unless you enjoy watching yEnc data flood your terminal.
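
A session goes something like this. The credentials are placeholders, and the exact response wording varies by server (the codes are standard, from RFCs 3977 and 4643); the 211 line is one of the real ones from earlier on this page:

      $ telnet news.west.earthlink.net 119
      200 News.GigaNews.Com
      AUTHINFO USER yourname
      381 Password required
      AUTHINFO PASS yourpassword
      281 Authentication accepted
      GROUP alt.binaries.sounds.mp3.dr_demento
      211 22714 1636319 1659032 alt.binaries.sounds.mp3.dr_demento
      QUIT
      205 goodbye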

9 August 2022

Seeing other problems. For a while attempting to fetch a particular article would just time out. I gave up on that for a week or two, and when I tried it again it worked. Then there was the "500 server error".

My biggest problem right now is that something is screwed up with the usage limits. This had been a monthly limit of 35GB, which stopped being aligned with the calendar month a long time ago, resetting on the 16th instead. Except in July I received the bandwidth-exhausted error beginning on the 16th. I might have used 35GB in the prior 30 days, but not in any shorter period.

Complaints to Earthlink eventually resulted in a call from a level 3 support person (who you would expect to know something) who had no clue what Usenet or NNTP was. Then silence.

5 October

A lot has happened. The first is that after going completely dark for a while, the server reappeared with a new IP address. Instead of being in the Earthlink subnet like it had been (216.168.4.170), it is now in the Giganews subnet (69.80.103.38). This is consistent with other Earthlink actions, like shutting down their DNS and SMTP servers, the latter being contracted out to the idiots at VadeSecure. After a while it actually started to work. Sort of.

The most obvious problem was frequent, as in more often than not, login failures. The error was 481: invalid login name or password. Retries eventually succeeded. I think this is also responsible for errors while posting articles.
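
The practical workaround is to just keep trying. A sketch (the ten attempts and the five-second pause are arbitrary choices):

    # Retry the connect/login until the 481s let one through.
    sub login_with_retry {
        my ($host, $user, $pass) = @_;
        for (1 .. 10) {
            my $nntp = Net::NNTP->new($host, Timeout => 60);
            return $nntp if $nntp && $nntp->authinfo($user, $pass);
            $nntp->quit if $nntp;
            sleep 5;
        }
        return undef;    # never got past the 481 errors
    }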

The other error, which has rendered it nearly useless, is a random failure to respond to the NNTP BODY command, which is used to retrieve an article body. The only valid replies are an error message or the article body. But instead there are random timeouts where the server doesn't respond at all. This occurs with a couple of Perl scripts and might be happening with Thunderbird as well. It is harder to tell with Thunderbird since it doesn't provide any details.
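
To at least distinguish a server error from a server that has gone silent, a single BODY request can be wrapped in an alarm. A sketch (the two-minute cap is arbitrary, and $nntp and $num come from the surrounding script):

    # Cap how long one BODY request may take.
    my $body;
    eval {
        local $SIG{ALRM} = sub { die "timeout\n" };
        alarm 120;
        $body = $nntp->body($num);
        alarm 0;
    };
    alarm 0;    # clear any alarm still pending
    if ($@ eq "timeout\n") {
        warn "no response at all to BODY $num\n";
    } elsif (!$body) {
        warn "BODY $num failed: ", $nntp->code, " ", $nntp->message;
    }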

Earthlink customer service

Getting anything on this out of customer service is an exercise in frustration. The bottom-level agents have no clue about NNTP or news.west.earthlink.net. Nor does the level above them. Not even the level 3 or "executive escalation" agents.

Given the new IP address the problem is almost certainly with that Giganews server. So the goal is to get someone at Earthlink to call Giganews.

8 October

A new problem today, adding to the others: I have started seeing "connection refused" errors. I used telnet to good effect here:

      $ telnet news.west.earthlink.net 119
      Trying 69.80.103.39...
      telnet: connect to address 69.80.103.39: Connection refused
      Trying 69.80.103.37...
      telnet: connect to address 69.80.103.37: Connection refused
      Trying 69.80.103.38...
      Connected to news.west.earthlink.net.
      Escape character is '^]'.
      200 News.GigaNews.Com
  

Additional tests showed that all attempts to connect to the .37 or .39 endpoints went the same way: connection refused. So why is the DNS configured to return those IP addresses?
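
Probing every address the DNS returns is easy to script as well. A sketch using the core Socket module (only the host name is specific to this problem):

    use strict;
    use warnings;
    use Socket qw(getaddrinfo getnameinfo NI_NUMERICHOST SOCK_STREAM);
    use IO::Socket::INET;

    my ($err, @addrs) = getaddrinfo('news.west.earthlink.net', 119,
                                    { socktype => SOCK_STREAM });
    die "lookup failed: $err\n" if $err;

    # Try a TCP connection to each address the DNS handed back.
    for my $a (@addrs) {
        my ($e, $ip) = getnameinfo($a->{addr}, NI_NUMERICHOST);
        next if $e;
        my $sock = IO::Socket::INET->new(PeerAddr => $ip, PeerPort => 119,
                                         Timeout  => 10);
        printf "%-15s %s\n", $ip, $sock ? 'connected' : "failed: $!";
        close $sock if $sock;
    }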

23 October

A new bit of weird this weekend. The response to the NNTP GROUP command includes information on how many articles are available in that group, plus the first and last article numbers. This weekend the number of articles available for at least one group suddenly jumped, then went back to one (essentially zero) as it had been for months. More checks made it appear that which of the two results I got was random. Sometimes one article, sometimes a lot:

      1666450992 211 1 1678602 1678602 alt.binaries.sounds.mp3.dr_demento
      1666451140 211 17669 1660934 1678602 alt.binaries.sounds.mp3.dr_demento
  

The first number on the line is the Unix time; 211 is the NNTP response code. The only randomness was in which of the two results I received: an article count of either 1 or the inflated 17669.

I wanted to check and see if there really were older articles available (retention at Giganews had been sliding for a while) but I ran into the login failure problem so wasn't able to.

I haven't seen this on many groups but then I haven't tried to check very many. Perhaps this is related to those additional IP addresses they added a couple of weeks ago. Or something else new. I don't know because I haven't heard from Earthlink for a while. Other than a request for a screen capture (from a terminal program?) there has been nothing.

23 December

Another new symptom. For several days I have been getting SIGPIPE errors, which of course terminate the session. Once the timing was right and the recv() call returned some extra information: "Connection reset by peer". So the NNTP server had, for reasons known only to itself, terminated the TCP link. These things seem to go in phases, so maybe this will fade to the point where it isn't a problem. I wouldn't bet on it.
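
For what it is worth, the crash itself is avoidable in Perl: ignore SIGPIPE and a dropped connection turns into an ordinary failed call, with the reason left in $!:

    # Don't let a server-side disconnect kill the whole script.
    $SIG{PIPE} = 'IGNORE';

    unless ($nntp->body($num)) {
        warn "BODY failed: $!\n" if $!;   # e.g. "Connection reset by peer"
    }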

31 December

I rarely use NZB files, but as an experiment I tried using one with nzbperl. It is configured to use two connections, which makes starting it a problem: with the all too frequent login failures it is difficult to get two good connections.

But after getting past that, it began downloading two files. Then it reported a remote disconnect on one connection while the other continued. A couple of minutes later the second one also reported trouble: first a "500 server error" (one of the many ways the server can say it failed to find an article body even though it had provided the header just moments before), then a remote disconnect.

Reconnection failed, but the error message was "authentication required", so this might be a flaw in nzbperl.

This answers one of the questions I have gotten from Earthlink customer service: could this be a network problem? If so, it is a very peculiar one, dropping one TCP connection while letting another continue. It looks more like a server error.

1 January 2024

Today the login failures were continuous with not one success. Hard to tell but it seems as though something really bad has happened. Like Earthlink terminated their deal with Giganews without telling anyone. That would be consistent with the way they ended personal web space. But I am tilting at the windmill of Earthlink customer support anyway.

3 January 2024

Usenet access began working again. Even better than before: I haven't had a single login failure, so maybe they fixed that old problem as well. For some reason they couldn't when I complained before. No word from customer support, of course.
