Thursday, April 28, 2016

Who called stopPropagation?

When dealing with mature JavaScript systems these days, there are a lot of moving parts -- frameworks, levels of indirection, shadow DOMs, minified code, and so forth. Today I was debugging an issue in which a third-party component was not able to "hear" an event at the top level of the document, even though the event was being fired. I figured someone was intercepting the event and canceling it, or calling stopPropagation(), or something. I was struggling to figure out who. There was a lot of code to hunt through.

If only I could set a breakpoint on stopPropagation() ... and I could!

If you want to play along, you'll need the following three files:


If you load the HTML page in a browser, you'll see a simple UI. Clicking the button changes the message to "bubbled!": the click event bubbles up through the DOM to expectingClick, where the event handler added by script.js handles it and updates the message.

But if the page is loaded with ?mischief as the query string (you can click the "Make mischief" link to do this), an intervening event handler stops the propagation of the event. So the message is changed to "clicked!" by the innermost listener, but it never changes to "bubbled!" That's the situation I had to figure out.
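Roughly, the setup being described might look like this (a sketch based on the description above -- the element ids, the intermediate element, and the query-string check are my guesses, not the actual demo files):

// Assumed markup: <div id="expectingClick"> <div id="intermediate"> <button id="button"> ...
// ... plus a <div id="message"> for the status text.

// Innermost listener: always runs and sets "clicked!"
document.getElementById("button").addEventListener("click", function(e) {
    document.getElementById("message").innerHTML = "clicked!";
});

// The mischief: an intervening listener that quietly kills the event on its way up.
if (window.location.search === "?mischief") {
    document.getElementById("intermediate").addEventListener("click", function(e) {
        e.stopPropagation();
    });
}

// script.js adds the top-level listener -- the one that never hears the click
// when the mischief listener is active.
document.getElementById("expectingClick").addEventListener("click", function(e) {
    document.getElementById("message").innerHTML = "bubbled!";
});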

I was using Google Chrome's Developer Tools to debug the issue, so I could set an event listener breakpoint on the event:


Once I did that, I generated the event and the debugger paused on the entry point of the event listener (line 2 of script.js, in our example). I then created the following function by typing into the Chrome console:
window.breakBefore = function(was) { return function() { debugger; return was.apply(this, arguments); }; };
This function takes an existing function as an argument and returns an equivalent function that executes a debugger statement before invoking the original.

At that point, I could replace the stopPropagation method of the event with my version (while paused at the breakpoint):

e.stopPropagation = window.breakBefore(e.stopPropagation)

Now, if I remove the click breakpoint and resume execution with the continue button:


I land inside my breakBefore wrapper at the moment stopPropagation is called, and the stack trace shows exactly who made the call and how it got there!
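The same trick should work for any other method you suspect is being called behind your back -- for example, while paused at the original breakpoint:

e.preventDefault = window.breakBefore(e.preventDefault)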

 

I was surprised it worked, and I'm sure it saved me hours. Hopefully it will do the same for you!

Saturday, June 8, 2013

When VirtualBox keeps re-setting your Windows 7 host-only networking IP address to the autoconfiguration address

I've had this problem intermittently over the life of my VirtualBox installation, and others have reported it too.

Basically, on every host reboot, Windows 7 would reset the IP address of my host-only network interface to the autoconfiguration address. That left the virtual machines unable to see the host -- and unable to get their own host-only IP addresses, which, at least on FreeBSD guests, caused a very long pause during boot, perhaps because I was also mounting host directories on the guest machines.

Solutions people have proposed for this are all over the map, so my solution may not work for everyone.

For me, I came up with an easy fix that worked.

First:
VBoxManage list hostonlyifs
to get the name of the host-only network interface.

Then:
VBoxManage hostonlyif delete <name>
to delete that interface.

Then:
VBoxManage hostonlyif create
to create a new one.

Then:
VBoxManage list hostonlyifs
to get the name of the new network interface.

Then:
VBoxManage hostonlyif ipconfig <name> --ip <ip>
to set the IP address to the address I wanted.

Now it works. I hope it's this easy for you!

Thursday, January 10, 2013

A more rigorous redistricting analysis

W.D. McInturff, who apparently is a Republican pollster, today published an analysis that is so misleading, so simplistic, I can't believe people would take it seriously. Yet Chuck Todd, a smart person, tweeted about it approvingly. NBC News published an article about it. If Nate Silver got his hands on it, he'd eviscerate it. But he hasn't yet done that, so it falls to me.

McInturff's basic mistake is simple: he compares the share of votes a party got in an election to the percentage of seats it won to determine whether a party had an advantage in that election. But that's stupid. Ronald Reagan, while winning 59% of the vote in 1984, didn't win 59% of the states. He won 49/50, or 98%. Is that because the state lines were drawn unfairly in 1984? That Reagan gained a 39% advantage because of state lines? No. It's because when you aggregate votes, of course the leader will win a higher percentage of districts than votes. Comparing the percentage of votes to the percentage of seats is like comparing a sports team's winning percentage to the percentage of points it scores in its games. They're not comparable.

Even more insulting is the snide tone he takes -- titling his essay "There's No Crying in Redistricting" -- while delivering such misleading analysis. Whether it's intentionally misleading or just slothful, I have no idea. McInturff's analysis basically argues that, sure, Republicans might have had an advantage in the recent House election due to redistricting, but it's nothing special. He mocks anyone who might be concerned that voters' will is being frustrated by poorly drawn district lines, portraying them as a bunch of whiners. And he leaves out seven of the 21 elections in the period he analyzes (1972-2012).

But the truth is, a reasoned analysis shows that the 2012 advantage enjoyed by the Republicans is very unusual, and clearly the most significant frustration of voters' will in the last 40 years (the period McInturff analyzes). He's dead wrong. Let's go to the numbers.

First, let's look at the elections that preceded the current district lines -- 1972 through 2010. McInturff starts his analysis in 1972. Unlike him, I look at every election since 1972 (he inexplicably leaves out seven of them). Here's the data:

Election   GOP 2-Party Vote Share   GOP 2-Party Seat Share
1972       47.3                     44.2
1974       41.5                     33.1
1976       43.1                     32.9
1978       45.6                     36.3
1980       48.6                     44.2
1982       44.5                     38.2
1984       47.4                     41.8
1986       44.9                     40.7
1988       46.0                     40.2
1990       45.8                     38.5
1992       47.3                     40.6
1994       53.5                     53.0
1996       49.8                     52.2
1998       50.5                     51.4
2000       50.2                     51.0
2002       52.1                     52.9
2004       51.4                     53.5
2006       45.9                     46.4
2008       44.5                     41.0
2010       53.4                     55.6

You can see the problem I pointed out above. Whichever party wins the vote usually wins an even bigger share of the seats: that's true in seventeen of the twenty elections above. Obviously vote share and seat share are not directly comparable.

In fact, a simple linear fit to the 1972-2010 data says that the best predictor of the percentage of seats Republicans won is the following formula (with both shares expressed as fractions):

GOP seat share = 1.9682 x (GOP vote share) - 0.4943
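For example, a 50% vote share predicts 1.9682 x 0.50 - 0.4943 = 0.4898, or about 49% of the seats -- which is where the 49 in the table below comes from.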

Let's have a look at what this relationship looks like:

GOP Vote %   GOP Seat %
60           68.7
55           58.8
53           54.9
52           52.9
51           50.9
50           49.0
49           47.0
48           45.0
47           43.1
45           39.1
40           29.3

You can see that if Republicans win more than half the vote, their seat share generally exceeds their vote share, and vice-versa.

You can also see something else: in a 50-50 election, Republicans are projected to win only 49% of the seats! This does suggest that, on average over these 40 years, Republicans suffered from a slight disadvantage. That much McInturff has right, although he calls it a "huge structural advantage" for Democrats -- and, as we'll see, it's smaller than the advantage Republicans enjoy now. I have no doubt that the party in control of redistricting loads the dice in its favor. But there's reason to believe it's never been done as effectively or thoroughly as it was after the 2010 census. Let's see why.

If we want a baseline for what would happen with "fair" district lines, we should adjust the formula a bit so that it doesn't build in a penalty for either party. Let's change the intercept -- the 0.4943 above -- to the value that makes a 50-50 election come out even in seats: 1.9682 x 0.50 - 0.50 = 0.4841. This yields the following table:

Party Vote %   "Fair" Party Seat %
60             69.7
55             59.8
53             55.9
52             53.9
51             52.0
50             50.0
49             48.0
48             46.1
47             44.1
45             40.2
40             30.3

So now we see how this might work, if everything were fair. A party winning 60% of the vote should control 70% of the seats; 55% of the votes is about 60% of the seats, and so forth.

So now let's look at the recent elections in context. Did any party have an "advantage" -- in which they won more seats than expected -- in any election? And what does it show about 2012?

For each election in the period, we'll take the GOP share of the two-party vote, compute the expected percentage of seats from the "fair" formula above, compare it to the actual percentage of seats won, and express the difference as a number of seats (just to make it easier to understand).

Year   GOP Vote %   Expected GOP Seat %   Actual GOP Seat %   Difference (seats)
1972   47.3         44.8                  44.2                -2.3
1974   41.5         33.2                  33.1                -0.5
1976   43.1         36.5                  32.9                -15.9
1978   45.6         41.3                  36.3                -21.7
1980   48.6         47.3                  44.2                -13.2
1982   44.5         39.3                  38.2                -4.8
1984   47.4         44.8                  41.8                -13.1
1986   44.9         40.0                  40.7                +2.9
1988   46.0         42.1                  40.2                -8.2
1990   45.8         41.8                  38.5                -14.4
1992   47.3         44.6                  40.6                -17.6
1994   53.5         56.9                  53.0                -17.1
1996   49.8         49.7                  52.2                +10.9
1998   50.5         50.9                  51.4                +2.0
2000   50.2         50.4                  51.0                +3.0
2002   52.1         54.2                  52.9                -5.5
2004   51.4         52.7                  53.5                +3.3
2006   45.9         41.9                  46.4                +19.8
2008   44.5         39.1                  41.0                +8.3
2010   53.4         56.8                  55.6                -4.9
2012   49.4         48.9                  53.8                +21.4
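If you want to check the arithmetic, here's a minimal sketch of the calculation behind the table, assuming a 435-seat House and the adjusted intercept from above (the function names are mine):

// Shares are expressed as fractions (0.494 = 49.4%).
var SLOPE = 1.9682;
var FAIR_INTERCEPT = 0.4841;  // the intercept that makes a 50-50 vote split the seats 50-50
var HOUSE_SEATS = 435;

function expectedSeatShare(voteShare) {
    return SLOPE * voteShare - FAIR_INTERCEPT;
}

function seatAdvantage(voteShare, actualSeatShare) {
    // Positive means the GOP won more seats than the model predicts.
    return (actualSeatShare - expectedSeatShare(voteShare)) * HOUSE_SEATS;
}

// 2012: 49.4% of the two-party vote, 53.8% of the seats.
expectedSeatShare(0.494);      // about 0.488
seatAdvantage(0.494, 0.538);   // about 21.7 seats; the table's 21.4 differs only
                               // because the published shares are rounded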

So now we can see how dramatic the Republican advantage was in the last election. This was only the second election in the period -- 1996 was the other -- in which the party that won fewer votes controlled the House. (McInturff erroneously concedes to NBC that this has never happened before -- presumably because 1996 was one of the elections he was too lazy to include in his study, or because it undercut his thesis that Republicans do not have an advantage.)

Was it the largest deviation recorded, using this model? Not in number of seats. The Democrats had a slightly larger advantage in 1978. But in that election Democrats were already winning a major victory over Republicans, and the deviation only padded their margin: the model predicted something like a 255-180 majority, and instead they had a 277-158 majority. Even by the absolute measure, 2012 is the second-largest deviation of the past 21 elections, so it's hardly the routine event McInturff would have you believe. (And 1978 is one of the elections he chose not to include, for whatever reason.) But 2012 is clearly the larger frustration of voters' will.

And there are good reasons to believe the 2012 election advantage is even more extreme and durable, by historical standards, than this simple model shows.

First, by most accounts, the polarization of districts has been increasing. If that's true, the slope coefficient should be decreasing, and the number of votes necessary to swing a seat should be increasing. That means the Republican structural advantage of 21 seats will be harder to overcome than it would have been in the past -- it will take more votes. There is some support for this in the data: from 2002 through 2010, the winning party underperformed the model in four of the five elections. Only in 2004 did the winner overperform -- and it should be noted that this followed the unusual mid-cycle redistricting in Texas, which produced GOP seat gains in the 2004 election. Other than that, the GOP overperformed in 2006 and 2008, when it lost, and underperformed in 2002 and 2010, when it won.

Second, the model above may omit a factor -- something having to do with incumbency, or new incumbency. After the two biggest wave elections -- the Watergate election in 1974 and the Gingrich takeover in 1994 -- the wave's beneficiaries appear to overperform relative to their vote share in the following election, and that advantage appears to persist for several elections. Because I don't yet have a conceptual mechanism to explain this, I'm not building it into the model -- but there may well be something I'm missing.

Third, redistricting just occurred, and was thoroughly controlled by Republicans. We might expect that Republicans would show an advantage with the new district lines. Let's look at the average difference by redistricting cycle:

Decade      Average GOP advantage (seats)
1972-1980   -10.7
1982-1990   -7.5
1992-2000   -3.8
2002-2010   +4.2
2012-2020   +21.4 (2012 only, so far)

No pattern? Nothing going on here, eh, Mr. McInturff? The data are moving in one direction. And there is every reason to believe that we've achieved historic levels of partisan advantage. There's no obvious reason to disbelieve what the model finds for 2012 and beyond. A completely different methodological approach undertaken by Nate Silver shows a marked decline in the number of swing districts, just since 1992 -- from 103 in that year to 35 in 2012 -- and further shows that the tipping point district -- the district that would determine control of the House if partisan voting followed presidential patterns -- is now a district that is 5-10 points more Republican than the nation as a whole.

So, nice try. Perhaps a cursory glance at the data led you to believe that nothing special was happening. Or perhaps you were just trying to justify an unjustifiable perversion of voters' will in your partisan direction. But a deeper look shows you're wrong -- we've reached a highly unusual level of partisan advantage in House elections. And it's bad for the country, because it makes it harder for voters to exercise their will on matters of public policy.

Monday, April 9, 2012

Deploy it as a service!

I used to make fun of people who were so in love with service-oriented architecture (SOA) that they would grandly want to deploy every general-use piece of code in the entire organization as a web service (rather than, for example, creating a Java class or something).

But I just had to create a web service to CALCULATE THE LOCAL TIME, because Google Chrome -- which seems like a pretty mature product -- has some sort of catastrophic bug that causes its time zone handling to get confused.

The debugger console shows the following ridiculous output (obviously today at 5:44 EDT):

> new Date()
Mon Apr 09 2012 10:44:42 GMT+0100 (Eastern Standard Time)


A quick search of the Chromium bug database reveals lots of users reporting bugs that are probably caused by this.
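The client side of such a workaround might look something like this (the URL, element id, and response format are invented for illustration -- the point is simply that the server, not the browser, formats the local time):

var request = new XMLHttpRequest();
request.open("GET", "/services/localtime", false);  // synchronous; it was 2012
request.send(null);
// The server formats the local time; we display it verbatim,
// bypassing Chrome's confused Date handling entirely.
document.getElementById("clock").innerHTML = request.responseText;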

I seriously hope they don't introduce a bug into JavaScript mathematics and make me write the AddNumbers service from my fictional SOA-infested dystopia ...

P.S. Even as I write this, all the timestamps on the Blogger edit screen show incorrectly...

Update 2012 May 16: It's a bug in how Chrome deals with the TZ environment variable on Windows. See the Chromium issue.

Saturday, October 23, 2010

Easier way to run Mercurial hgk (or hg view) on Windows (and Cygwin)

hgk, the Mercurial extension that provides the hg view command, is the graphical history viewer for Mercurial repositories.

It's a bit messy to run it on Windows. The solution I came up with was easier than the others I found on the web, so ...

I downloaded and installed the Mercurial 1.6.4 Windows binary (making sure to include the "contributed scripts" so that I got hgk) and ActiveState's ActiveTcl 8.4. Beyond installing those two packages, you need to modify your Mercurial configuration and create one batch file in a directory of your choosing, as follows.

Added to Mercurial.ini (user or systemwide; probably would work with $HOME/.hgrc as well):

[extensions]
hgext.hgk =

[hgk]
path=C:\wherever\you\want\hgk.bat

Created hgk.bat (at path indicated above):
(Note that wish.exe takes the path to the hgk script as an argument, so the start line below must stay on a single line even if Blogger wraps it.)

@echo off
set HG=C:\path\to\mercurial\hg.exe
start C:\path\to\tcl\bin\wish.exe C:\path\to\mercurial\contrib\hgk %*
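Once that's in place, running hg view from inside a repository working copy should bring up the graphical history browser.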

And that's it! I don't know why the official instructions are so hard. And in any case, they did not work for me. YMMV.

Thursday, October 21, 2010

Ephemeral ports

I just learned about the so-called "ephemeral port range," which for years has been keeping servers on my local Windows machine from starting. Even after killing the process that had grabbed the server port, for whatever reason I couldn't get a Java application to listen on the newly freed port. This problem -- not being able to use ports even when the netstat command reports them free -- seems to be pretty common, and not Java-specific: see one of these five articles for more.

Anyway, the "traditional" (who knew? After running BSD all these years, I had no idea) "ephemeral port" range is 1024-4999. I knew ports below 1024 were essentially reserved for root on Unix systems, but I didn't know there were any other special ranges. Ports in the 1024-4999 range are handed out by the operating system for temporary use -- typically to client applications for the client end of a connection.

Someone generously took the time to compose a very detailed overview of, and reference guide for, ephemeral ports on many operating systems. It certainly was helpful for me.

It is possible to reconfigure the ephemeral port range on Windows by hacking the registry. But simpler for me to just change my port numbers.
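(If you do go that route: on the older Windows versions I was dealing with, the relevant value is, as far as I know, MaxUserPort under HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters; newer versions of Windows manage the dynamic port range through netsh instead.)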

Sunday, July 19, 2009

I will never understand regular expressions

Not at this rate, anyway. Consider the following code snippet (tested in Firefox 3.5). You can probably see it's an attempt to match empty (self-closing) HTML tags. It ... does not work.

var pattern = /\<([a-zA-Z0-9_\:]+)((?: [a-zA-Z0-9_\:]+=".*?")*?)\/>/;
var match = pattern.exec('<span class="noread nodeleted"><input class="edit" size="40" ui:id="set_title" value="awake"/></span>');
inonit.debug.debug("Match: " + match[2]);


The output is:
class="noread nodeleted"><input class="edit" size="40" ui:id="set_title" value="awake"

Why?!? Shouldn't the non-greedy quantifier prevent us from capturing everything between the first and last quotation marks?

I messed with this for far too long, until I stumbled across a solution:

var pattern = /\<([a-zA-Z0-9_\:]+)((?: [a-zA-Z0-9_\:]+="[^"]*?")*?)\/>/;
var match = pattern.exec('<span class="noread nodeleted"><input class="edit" size="40" ui:id="set_title" value="awake"/></span>');
inonit.debug.debug("Match: " + match[2]);


The output for this?
class="edit" size="40" ui:id="set_title" value="awake"

Which is, of course, what I'm looking for. But why do I have to specify that I'm not matching the double quote character?

Maybe I'm just dumb and I'm missing something about non-greedy quantifiers. But then why does this work?

var pattern = /"(.*?)"/;
var match = pattern.exec('"Hello" "World"');
inonit.debug.debug("Match: " + match[1]);


Output:
Hello

...of course.

Makes no sense to me.
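For what it's worth, the best explanation I've been able to come up with: a non-greedy quantifier only controls the order in which the engine tries possibilities, not what the group is ultimately allowed to match. If the only way to reach the required /> is to let .*? swallow quote characters, backtracking will make it do exactly that -- whereas [^"]*? can never cross a quote. Here's a minimal sketch of the same effect (the pattern and input are mine, not from the code above):

var lazy = /"(.*?)"x/;
lazy.exec('"a" "b"x')[1];      // 'a" "b' -- backtracking stretches .*? across the quotes,
                               // because that's the only way to reach the required x

var strict = /"([^"]*?)"x/;
strict.exec('"a" "b"x')[1];    // 'b' -- the group can't cross a quote, so the match
                               // has to start at the second quoted string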