Tuesday, February 4, 2014

Making Peace With Comcast Triple Play

I have finally given in.  After years of refusing to sign up for a Comcast Triple Play package, I have consumed the Kool Aid.  And I have to admit, I am happy, but it took a while to get here.

My objection to the Triple Play is Comcast Voice, or more specifically, the loss of control over my equipment as a Voice customer.  I first got internet from Comcast July 3, 1998 (at a then life-changing download speed of 256 kbps).  I quickly understood the economics of renting a cable modem from Comcast and bought my own.  I have upgraded several times and worked through several routers as well over the years.  Besides saving money, it is nice to control my own upgrade destiny.

A few days after signing up for my new package, I received the "wireless gateway" that Comcast wanted me to use, a Technicolor TC8305C.  Technicolor?  Really?  Not even Arris?  I confirmed through Comcast that the device supports DOCSIS 3.0, but was immediately disappointed to see that it does not support IPv6.  Powering up the unit and connecting to a laptop only worried me further.  Only a 2.4 GHz radio, no guest network and very limited firewall configuration.  It was also unclear whether the DHCP server supported address reservation.

Oh well, I figured, I'll just disable the router and use the device as a telephony modem.  No such luck.  Bridge mode cannot be enabled by the user.  Several people on the Comcast forums and DSL Reports reported problems getting Comcast support to enable bridge mode, and that the modem dropped out of bridge mode after being power cycled.

Based on posts indicating that Comcast allowed customers to own telephony devices, I ordered an Arris TM822G through Amazon.  I took the Technicolor device to my local Comcast office and told them I was going to use my own modem.  I was told that, unlike with cable modems, my Comcast franchise did not allow consumers to purchase and use their own telephony device.  After some discussion about the inadequacies of the router in the device, the service rep brought out an assortment of telephony modems that Comcast rented.  The only one that supported DOCSIS 3.0 was a Ubee DVM3203B.  Ubee?  I thought Technicolor was bad.

My options having run out (other than dropping the Triple Play before even activating it), I took the modem home.  I spent a little under an hour trying to activate it at comcast.com/activate.  The DNS server Comcast configured did not even resolve comcast.com.  Yikes!  I tried IP addresses and using curl or telnet to connect, but had no success.
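Looking back, there is a cleaner way to script what I was attempting: curl's --resolve option pins a hostname to an address, sidestepping the broken resolver entirely. The address below is purely a placeholder, since I never did find a working one:

```shell
# Pin comcast.com to an IP address so curl skips DNS entirely.
# 96.0.0.1 is a placeholder, not a real activation server.
curl --max-time 5 --resolve comcast.com:80:96.0.0.1 http://comcast.com/activate \
  || echo "connection failed"
```

The same trick works for https by pinning port 443 instead.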

Calling 1-855-OK-BEGIN connected me to a young woman who was confident we'd be up and running in ten minutes.  Let me assure you, it was more like thirty minutes.  Things started out smoothly, with me reading MAC addresses off the modem and us running through power cycling the modem.  Multiple times the modem light sequence never got to the expected state.  When we finally got the lights the way they were supposed to be, my phone had no dial tone and my computer could not get an address via DHCP.  The young woman did something to "activate" the phone line, and one last power cycle got me both a dial tone and an internet connection that allowed me to browse.

Before I forget: I had to wade through many posts with incorrect information before I found that the username/password to access the cable modem status (at the standard 192.168.100.1) is admin/cableroot.  I was reassured to see that all eight download channels and three of four upload channels were active.
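For anyone who prefers a script to a browser, the same status page can be pulled with curl. Passing the credentials with -u assumes the modem accepts HTTP basic auth, which I have not verified; some firmware presents a login form instead:

```shell
# Fetch the cable modem status page at the standard DOCSIS address.
# admin/cableroot as noted above; -u assumes HTTP basic auth.
curl -s --max-time 5 -u admin:cableroot http://192.168.100.1/ \
  || echo "modem status page not reachable from this network"
```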

With that done, I expected to swap in my router for my laptop, power cycle the router, and be up and running.  No such luck.  Only after multiple power cycles of both the modem and the router did the router finally get an IP address.  It was a 50.x.x.x instead of the 71.x.x.x that I had had for years, but who cares about the public IP address when you are finally up and running again.

Well, up and running is an overstatement.  Browsing felt like I was working on an old dial-up modem.  I had trouble even connecting to speedtest.net, and when I did I got 0.3 Mbps download, while the upload test never finished.  I checked the modem status and the signals were all good.  Downstream power levels were between 2 and 5 dBmV, with SNR over 40 dB.

I power cycled everything again.  This time my WAN IP address was back to its old 71.x.x.x, but speedtest readings were still horrible.  I kept running speed tests about every minute while trying to figure out what could be wrong.  Finally, about twenty minutes later, speedtest results were back to about 58 Mbps download and 11 Mbps upload, just what they were before the equipment change.  I don't know whether the slowness was coincidental or the result of changing the modem, but I was happy to have my old performance back.

It's about a month later, and the modem has worked without problems, both internet and voice.  My WAN IP address switched back to 50.x.x.x soon after the first day, which caused a couple of problems with work connections where IP addresses are whitelisted, but those issues were quickly resolved.

Wednesday, April 11, 2012

Reviving Twonky Media on WD My Book Live

I have a 2 TB WD My Book Live NAS drive for storage of backups and media. I love that I can write to it faster (about 45 MB/s using robocopy) than a local USB 2.0 external drive (under 30 MB/s). However, I have loathed the device, too, since the music, video and pictures I stored on it were not showing up in DLNA client software (e.g. my PS3, NetGear MP-101, Macs and PCs). The server was visible to all, but it appeared to have no media files to serve.

I confirmed the basics of the configuration multiple times. Twonky Media was enabled. Each share was flagged as sharing all media file types. I restarted the service, rebuilt the database, rescanned the device. Nothing changed. After googling, I discovered I could connect to the NAS box with ssh. There were many recommendations to do this, rename two files, then restart Twonky Media. I did so with no change in behavior. However, after a little digging, I discovered a problem with the Twonky Media configuration. Once I changed that, everything started working.

The steps I followed were:

Log in to the MyBook Live at http://mybooklive/UI/login
Enable SSH at http://mybooklive/UI/ssh
Connect via ssh (username = root, password = welc0me)
cd /CacheVolume/twonkymedia
vi twonkymedia-server.ini
change contentbase=/ to contentbase=/shares
save and exit
restart twonky at http://mybooklive:9000/config under Maintenance

Depending on your network, browser and ssh client, you may need to use a numeric IP address rather than "mybooklive". If you use Windows and don't have an ssh client, I recommend PuTTY, which is what I used.
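The vi edit above can also be done non-interactively with sed, which is handy if you script the fix. Here is the substitution demonstrated on a mock local copy of the ini file (the friendlyname line is invented; only the contentbase line matters):

```shell
# Demonstrate the contentbase fix on a local stand-in for
# /CacheVolume/twonkymedia/twonkymedia-server.ini.
printf 'friendlyname=MyBookLive\ncontentbase=/\n' > twonkymedia-server.ini
sed -i 's|^contentbase=/$|contentbase=/shares|' twonkymedia-server.ini
cat twonkymedia-server.ini
```

On the NAS itself, run the sed line in /CacheVolume/twonkymedia over ssh, then restart Twonky as in the last step. (GNU sed syntax; macOS sed would need -i ''.)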

Sunday, April 3, 2011

Why Is Flash Still Hogging My CPU?

Flash has had hardware acceleration "forever", or at least as long as I can remember. I suppose it did not back when it was still called Shockwave Flash, but in those days there probably was not much hardware acceleration to take advantage of.

The purpose of hardware acceleration is to allow Flash to do its job while using less CPU, letting the GPU do some of the work instead. On a computer with a slower CPU, this may be the difference between smooth video and something closer to a choppy sequence of poor-quality still photos. For more modern computers, this should allow the machine to produce smooth, high quality video while allowing the CPU to work on other things.

Flash 10.1 upped the ante by adding hardware decoding, which means the work of decompressing highly compressed video formats like H.264 could be performed by GPUs with this capability built in. The 10.2 iteration of Flash introduced Stage Video, which "helps websites deliver best-in-class video across screens and browsers by enabling access to hardware acceleration of the entire video pipeline."

After upgrading to Flash 10.2, I was left asking "why is Flash still hogging my CPU?" Watching March Madness On Demand from my old-but-serviceable everyday desktop (Athlon 64 X2 3800+, Windows XP SP3, GeForce 8400GS graphics), for example, pinned the CPU between 90 and 100%. Other video sites like ABC and Hulu ran at about 50% CPU, effectively monopolizing one of the two CPU cores.

I wondered whether I needed to upgrade to a newer, but still entry level, video card like a GeForce GT 220, 240, 430 or 440, or whether the DirectX 9 limitation of Windows XP was a problem. However, when I checked out Stage Video on Adobe's site, I ran the examples and found that 720p video scaled to full screen could run at under 15% CPU. The Big Buck Bunny demo is especially cool, as it allows you to turn Stage Video on and off to compare CPU usage.

After a little thought, my suspicion is that sites like ABC and Hulu use Flash plug-ins or other code that prevents the Stage Video pipeline from working, probably to implement digital rights management (DRM). Maybe the problems are related to the way in which the videos are encoded or streamed. In any case, other sites for which this seems to be true include Crackle, The WB, PBS and CBS.

Tuesday, February 1, 2011

Flash At Last For My Archos 32

I got an Archos 32 Internet Tablet for Christmas.  It is not really a tablet; I call it my Android Touch because its form factor and primary usage profile are similar to the iPod Touch.  My objective was to have an Android device to verify the functionality of apps I develop.  The specs are reasonable for the price and Archos released a firmware upgrade in December that included Froyo, also known as Android 2.2.  I have to admit, though, that I was jealous of my son, for whom I bought an Archos 70, which has a truly beautiful multi-touch screen.

The downside of having a non-phone device is that it is not Google certified and out of the box does not have Google apps such as Gmail, YouTube, and most importantly, the app Market.  I finally got around to addressing this and was successful enough to have Flash 10.1 running within minutes.

The magic is gapps4archos.apk.  A link for downloading it is in the forums on archosfans.com.  Because the forum post specifically talks about firmware 2.0.54 and I had already upgraded to 2.1.04, I did not have high hopes that the app would work.  Regardless, I downloaded the file to my PC. After turning off application debugging via USB on my Archos 32, I connected the USB cable between it and the PC. The Archos showed up as the E: drive in Windows. I copied the downloaded file from the PC to E:, ejected the E: drive in Windows, then unplugged the device from USB. I opened the Files app on the Archos home screen, found the file gapps4archos and tapped it. (Note that my device's application settings allow apps from unknown sources.)  In the app, I clicked the button to install Google apps. When that was done, I rebooted the Archos (held the power button down, chose Power Off, then Reboot). When it came back up the home screen had Gmail, YouTube, the app Market, and others.

I first ran Gmail, entered my Google account information, and was soon synchronizing data and settings between the Archos and Google. I then started the Market app, searched for "Adobe Flash" in the market, and chose to download Flash 10.1.  When that download was done I started the Browser app and went to addictinggames.com.  Success.  To test Flash video streaming, I went to crackle.com and was watching a trailer with just a few taps.  I was able to rotate to landscape orientation and push the Flash player to full screen mode.  OK, on a 3.2" screen, it was not an amazing cinematic experience, but it was extremely gratifying nonetheless.

Wednesday, July 7, 2010

What mlb.tv Revealed to Me About My Router Configuration

I recently re-subscribed to Major League Baseball's video streaming service mlb.tv.  I used it last August and September to follow pennant races and had generally good results using my MacBook connected via wi-fi both at home and while traveling.

This year, however, my experience was immediately horrible.  I could not get past the bandwidth/quality check that happens after the media player browser window is launched.  I verified that I had the latest versions of the Flash Player and NexDef plug-in.  I rebooted.  I tried Safari, Firefox and Chrome.  I enabled all cookies and pop-up windows. The results remained the same.  When I checked network traffic in Activity Monitor, I saw throughput peak at about 35 kBps.

To determine whether the behavior was specific to the MacBook, I tried to connect from my desktop PC.  Nothing new.  I then downloaded the mlb.tv application to my PS3.  I got some sound and a very halting video stream.  That told me very little, but at least was consistent with the small amount of network traffic I had observed.

Very late last night I resumed the investigation from my PC.  I was able to get sound and some video.  Network throughput was higher, 50-150 kBps.  I used the sysinternals Process Monitor to determine the server to which Chrome was connecting, which turned out to be hosted by Akamai rather than MLB itself.  I was surprised, however, to discover that tracert showed the server connected to Level3's network in Los Angeles.  Akamai's service should connect me to a nearby server; I am in Philadelphia and served by Comcast.

Believing that Akamai uses DNS locality to pick a server, I checked the DNS addresses in my router.  Running tracert for these showed that they were both on the Level3 network in Los Angeles.  My past experience has been that DNS servers should be local ones specified by Comcast when the router does its DHCP initialization.

Looking through the configuration for my D-Link DIR-825 router, I discovered that an option labelled Enable Advanced DNS Service was checked.  I unchecked it and rebooted the router.  The router came up with DNS addresses that I recognized as Comcast (68.x.x.x).  In fact, they matched the Philadelphia addresses listed at http://dns.comcast.net/dns-ip-addresses.php.  I immediately fired up mlb.tv, got an excellent video stream, and saw my network traffic vary from 0 to 1.1 MBps.  All is well on my PS3 and MacBook, too.
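The whole check can be scripted. On a Linux or Mac machine, this sketch lists the resolvers in use and traces the route to each; a resolver that routes through Level3 in Los Angeles instead of a nearby Comcast hop is the red flag I was chasing. (On a Windows box behind the router, ipconfig /all shows the same addresses.)

```shell
# List the DNS resolvers currently in use, then trace the route to each.
# Distant hops (e.g. a Level3 router across the country) are the warning sign.
awk '/^nameserver/ {print $2}' /etc/resolv.conf 2>/dev/null | while read -r ns; do
  echo "resolver: $ns"
  traceroute -m 8 -q 1 -w 2 "$ns" 2>/dev/null || echo "  (traceroute unavailable)"
done
```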

I don't recall checking the Enable Advanced DNS Service option.  It may have been added or checked during a firmware upgrade.  Had I known that it would override the DNS server information provided by Comcast and instead use a DNS server across the country, I certainly never would have selected it.

Thursday, April 1, 2010

Getting Good Results With Vonage

Frequent service outages combined with long resolution times from my land line provider (Verizon) finally convinced me to try Vonage. The current Vonage deal ($15/mo for the first 6 months) didn't hurt, either. I had some problems getting started, but after a few tweaks, my service is running smoothly.

My home (Ethernet) network uses a D-Link DIR-825 router with a Motorola SB5100 to connect to the Internet via Comcast. Besides some switches used to distribute Ethernet throughout the house and within my home office, I have a VOIP device connected to the router that I use for work. My experience with that device has been great (plugged it in, flashed it, and it worked), but I was worried that I would not be able to get two VOIP devices to play nicely on my network.

I signed up for Vonage through their web site, moving my land line phone number to the Vonage connection. It was 7 days before the number was moved.

Vonage provides a device they call V-Portal for free as part of establishing service with them. The installation instructions place this between the cable modem and existing router. I reluctantly proceeded this way and quickly discovered that the V-Portal includes an embedded router and firewall.

Within 15 minutes I had come upon a deal breaker. I use PPTP VPNs to connect to various networks for work. Going through my D-Link and the V-Portal, I was able to establish VPNs, but within 2-3 minutes the VPN was dropped.  This is presumably a double-NAT issue, since data was traversing two routers.

My response was to reconfigure my setup to have my D-Link router connected to the cable modem with the V-Portal plugged into an Ethernet port off of that. My VPNs stopped dropping and the V-Portal still worked with my phone. The only negative of this configuration is that I had to connect a computer directly to the V-Portal's LAN port to reach its web management interface and enable management through the WAN, since from my home network only the V-Portal's WAN interface is reachable.

With the networking side appearing to work, I prepped the home phone network for Vonage. This simply meant disconnecting the network from the provider interface. In other words, I removed the cable coming from outside the house from the interior connection junction for my phones. This put all the phones in the house back in business.

One remaining problem slowly became apparent. During a call, audio on our side of the call would go dead for one to several seconds. The people to whom we were talking claimed no similar problem.

The most likely culprit seemed to be dropped outgoing packets. My initial cynical conclusion was that Comcast somehow determined these were Vonage VOIP packets and intentionally dropped them.

Looking for a solution within my reach, my attention turned to the D-Link router. It is a well regarded product with many features, any of which could be causing a problem. Browsing the options within the Advanced tab of the configuration application yielded many possible culprits and an equal number of promising options for a solution.

My first forays were changes to the WAN Shaping configuration, all with no efficacy. My attention then turned to the QoS engine. When I looked at Internet sessions, I could see that connections for both my work VOIP and Vonage devices had higher numerical priority values than other connections. Reading the on-line help indicated that lower numerical priority values were actually given higher priority. That sounded wrong (VOIP should have higher priority). I considered setting up a rule to force a lower numerical priority for Vonage, but instead simply disabled the QoS engine. Based on our experience since, that seemed to be the ticket: no more audio dropping during calls. Having QoS turned off has not adversely affected on-line Call of Duty play in any noticeable way, either.

Thursday, October 15, 2009

Winding Road From Hi8 To H.264, Part 1

I got my first video camera (aka camcorder) back in 1992. I don't recall whether digital recording was available in consumer devices at that time, but I chose one that recorded Hi8, an analog format.

My wife, dog, and I were living in a rented house in Princeton, NJ. During the week we each had our respective long commutes, so the cam was used mainly on the weekend to record our dog and the deer that wandered into our backyard. Years passed, we moved, bought a house, and had a son. Although I was by no means an avid recorder, by 2002 I had a couple dozen two hour tapes lying around.

Watching the tapes was more trouble than it was worth. The main issues were (1) I had done only a so-so job of writing down what was on the tapes, (2) the camcorder was painfully slow rewinding and fast-forwarding, and (3) each tape had highlights separated by footage that was not especially interesting to watch.

In order to capture the highlights and create movies that could be burned to DVD, I bought Pinnacle Studio 8 AV. This bundle included the Studio 8 software for capturing video from an analog source, editing it, then creating a finished product, either a video file for the computer or a burned DVD. Along with this came the DC10plus, a PCI card to capture video from an analog source.

The capture was pretty decent. It produced AVI files with MJPEG encoding. The quality was generally about as good as the source in terms of resolution (608 x 464) and clarity. I believe the captured video was interlaced the same way as the source, but the codec was good enough to play this back without nasty artifacts.

However, the capture was not without issues. First, it needed to be done without other applications running, otherwise frames would be dropped. Second, it produced files that were, especially in those days, huge. A 9 minute capture required a 1.6 GB file. My rig at that time had something like a pair of 40 GB hard drives.

To the extent that I had the patience, though, I could locate highlights on my tapes, capture a few minutes of each, then compose DVDs. Locating sections of tape that I wanted to capture was still the biggest pain, but I did create a few DVDs. The quality of the DVDs was decent. Studio 8 encoded to MPEG-2 at up to about 6 Mbps. That's not as good as commercial DVDs, but reasonable considering the quality of my source content.

About a year ago I bought a MacBook for my birthday. One of my desires was to use iMovie to create more videos, from both the older (pre-2002) tapes and the ones I had recorded since then. I figured with the larger hard drive I could work with digital versions of entire tapes, which would make selecting footage much easier.

Little did I foresee the problems I would have. I captured a full tape on my PC (which by this time was a new model with considerably more disk space), copied it to the Mac and tried to open it. It would not play. Some research indicated that I might need a different codec for MJPEG. I downloaded every MJPEG codec I could find for the Mac with no success.

Since the Studio 8 software can create other formats for output, I investigated those possibilities. As far as I could tell, MPEG-2 was not an option for input to iMovie. The resolution and bit rates available for WMV output in Studio 8 were not acceptable. Intel's Indeo codec was not supported on the Mac, at least not for the version of iMovie I was using. The Cinepak codec produced good movies on the PC, but the audio was MIA on the Mac. That was almost a blessing, since the Cinepak rendering on my PC was glacial. The only output I could produce on the PC that iMovie could properly read on the Mac was DV rendered within an AVI.

Rather than solving all my problems, this yielded new ones. The DV files were larger than the MJPEG ones, which is not surprising since DV uses only light, fixed-rate intraframe compression. Four minutes of video produced a file of about 900 MB. Worse, there was an iMovie incompatibility for some files produced by Studio 8. The original AVI spec supported files up to 1 GB. While the format for larger files eventually became standardized, it appears that the output from Studio 8 does not follow this. I am not sure of the exact nature of the incompatibility, but I could see in the output from the GSpot application that the larger AVIs had a structure that confused it somewhat as well.
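The file sizes in this post are consistent with the nominal rates of the two formats, which a little arithmetic confirms (taking GB and MB as powers of ten):

```shell
# Back-of-the-envelope bitrates for the captures described above:
# MJPEG: 9 minutes -> 1.6 GB; DV-AVI: 4 minutes -> 900 MB.
awk 'BEGIN {
  printf "MJPEG capture: %.1f Mbps\n", 1.6e9 * 8 / (9 * 60) / 1e6
  printf "DV-AVI:        %.1f Mbps\n", 900e6 * 8 / (4 * 60) / 1e6
}'
```

The roughly 30 Mbps for DV lines up with DV's fixed 25 Mbps video stream plus audio and AVI container overhead.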

I did some work with creating DV-AVI files from my captured video, but it was still an unpleasant process. I recently set out to find a better method, which I will discuss in my next post.