Introduction to Queueing Theory and Stochastic Teletraffic Models, Moshe Zukerman, 2014
From the Flawed “Theory of Constraints” to Hierarchically Balancing Criticalities (HBC) – argues that while Goldratt’s Theory of Constraints is seriously flawed at the theoretical level, it has still promoted Operations Management more than anyone else has managed to do.
All I have to say is: Wow.
Some things I’ve been reading that demand follow-up:
New Yorker, June 23, 2014, The Disruption Machine, by Jill Lepore
This is a critique of Clayton Christensen’s theory of disruptive innovation. I find the article both interesting and tragically flawed. There are certainly many things about this theory that can be picked apart and dismantled, but the author fails to make a case against the entire theory. I find some of the arguments akin to the attacks on the theory of evolution (and, as I was thinking this while reading the article, M. Lepore mentions that very similarity). So file this under “critiques that I need to critique”, but read this article after you’ve read at least one of the Christensen books.
American Scientist, June 2014, Quantum Randomness, Scott Aaronson
Alas, this is not the start of a regular column, just a two-part article. M. Aaronson is both highly educated and intelligent, and a witty and clear writer. Read this article and try to understand it. You might find the “Free Will Theorem” of particular interest.
Suppose you agree that the observed behavior of two entangled particles is as quantum mechanics predicts (and as experiment confirms); that there’s no preferred frame of reference telling you whether Alice or Bob measures “first” (and no closed timelike curves); and finally, that Alice and Bob can both decide “freely” how to measure their respective particles after they’re separated (i.e., that their choices of measurements aren’t determined by the prior state of the universe). Then the outcomes of their measurements also can’t be determined by the prior state of the universe.
This is a fairly up-to-date article, ending by discussing 2014 research. And this is on the path to being directly used: NIST is trying to develop practical systems.
Medidata Engineering Blog, No Single Points of Failure
I wish I could recommend this as must-read, but, at least for me, it just points out the right direction without having much new to say about how to get there. So my follow-up here is simply to elucidate those better ways.
Google announced FlatBuffers, a new open-source project, in a blog post
This is akin to, but has different use cases than, say, Protocol Buffers. The biggest difference is that you don’t unpack the data; you access it in its delivered format, apparently without parsing overhead.
The Github page is: https://github.com/google/flatbuffers/
I’ll definitely be taking a look at this soon. Maybe a lot of the data we deliver to endpoints will be as FlatBuffers instead of JSON or custom binary blobs or complicated Protobufs.
The impetus and target for this seems to be game developers and their desire for efficiency along with portability. Protobufs are nice, but they’re a heavy solution on the client side.
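To make the zero-copy idea concrete, here’s a toy sketch using only Python’s standard library. This is not the actual FlatBuffers API; the record layout, field names, and offsets are all made up for illustration. The point is the access pattern: a field is read straight out of the received buffer at a known offset, with no deserialize-everything step.

```python
import struct

# A hypothetical 16-byte wire record: id (uint32), score (float64), flags (int32).
RECORD = struct.Struct("<Idi")

def make_record(rec_id, score, flags):
    """Build the bytes a sender would transmit."""
    return RECORD.pack(rec_id, score, flags)

def read_score(buf):
    """Read one field in place: no intermediate object graph is built."""
    view = memoryview(buf)                # zero-copy window onto the bytes
    (score,) = struct.unpack_from("<d", view, offset=4)
    return score

wire = make_record(42, 99.5, 1)
print(read_score(wire))                   # prints 99.5
```

The real library generates accessor code from a schema, but the win is the same shape: the delivered buffer is the data structure.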
This is pretty cool
The best way to describe it is “interactive curl”.
Rewriting a Python library function in C drops execution time from 110 microseconds to 320 nanoseconds, roughly a 340× speedup. That’s a respectable optimization.
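I don’t have that library’s code to show, but the shape of the win is easy to demonstrate with stdlib tools: here’s a hedged sketch comparing a pure-Python loop against the C-implemented builtin sum using timeit. The function and sizes are illustrative only, and actual timings will vary by machine.

```python
import timeit

def py_sum(xs):
    # Pure-Python loop: interpreter overhead on every single iteration.
    total = 0
    for x in xs:
        total += x
    return total

data = list(range(10_000))

t_py = timeit.timeit(lambda: py_sum(data), number=200)
t_c = timeit.timeit(lambda: sum(data), number=200)  # builtin sum is implemented in C

print(f"pure Python: {t_py:.4f}s, C builtin: {t_c:.4f}s")
```

Same inputs, same answer; the difference is entirely in who executes the loop.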
So, a program finally passed the Turing Test. Cool.
Some will say this is a cheat along the lines of Parry (an early chatbot that pretended to be a paranoid person, thus providing excuses for when the program diverged radically from sensible answers), but this is a far more significant milestone.
That plus the gigantic amount of work going into speech synthesis at the commercial level:
should add up to intelligent-sounding programs soon.
There will be funny sidelights along the way:
This does not mean we have AI yet. What we have are programs that, with a ton of information and clever programming, can interact with humans in natural ways. But that’s huge. Programs like Siri are already useful, and we will probably see voice-driven devices become the norm rather than the exception in the next 10 years.
The big question is – when will we actually have human-level artificial intelligence? There’s a hard-core minority that thinks it will never happen because insert-reason-here: John Searle is one of those very vocal people, and his Chinese room argument is still quoted today (hint: it’s nonsense – really):
There’s another hard-core minority that thinks “any day now”, and this camp is represented by people like Ray Kurzweil – see his latest book “How To Create a Mind” as an example of this thinking, which is “ok, we finally get it, we just connect a bunch of insert-thing-here and intelligence springs out of it automatically”:
Every single “AI is not possible” argument boils down to “because”. We (humans) are the existence proof – there’s no magic involved in constructing a human, we just don’t know how to do it. However, we’ve been underestimating just how hard intelligence is to create from scratch for almost 3000 years.
I studied AI in college, and I even named my first company as an AI homage. However, I have to agree with people like Scott Aaronson, who, when asked, said “a few thousand years”:
I don’t know if I agree with the “few thousand years” timeframe, but on the other hand I think it’s very possible that we don’t understand anything meaningful about intelligence yet. One reason I feel comfortable saying that is that most people’s answers about how to create AI start with things like “emergent behavior” and “we just need 100 billion artificial neurons”. In other words, “magic happens”.
You don’t make things with magic. Really.
When do I think we could have AI? Honestly, there’s no good way to estimate it. I can give a counter-estimate, which is “absolutely not in the next 100 years”. One of the things that holds us back is that we actually can’t define it yet. This means that as milestones happen, the rabid pro-AI crowd claims: “look, we have AI now” and the rabid anti-AI crowd says: “No you don’t, it’s a trick, intelligence means new-definition-inserted-here”. Remember, in the 1950s, we thought that a chess-playing program would be AI. Of course, it turned out that all you need is a brute-force search program with some clever optimizations and a large memory, and you can even beat the world champion. We didn’t learn anything positive about intelligence from writing chess programs.
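The core of that brute-force approach, minimax search with alpha-beta pruning, fits in a few lines. Here’s a toy sketch over a made-up three-ply game tree; this is not chess, and nothing like a real engine’s evaluation function or optimizations:

```python
def alphabeta(node, alpha, beta, maximizing):
    """Minimax with alpha-beta pruning over a toy game tree.
    Leaves are numbers (static scores); interior nodes are lists of children."""
    if isinstance(node, (int, float)):
        return node                       # leaf: return its evaluation
    if maximizing:
        best = float("-inf")
        for child in node:
            best = max(best, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, best)
            if alpha >= beta:             # opponent will never allow this line
                break
        return best
    best = float("inf")
    for child in node:
        best = min(best, alphabeta(child, alpha, beta, True))
        beta = min(beta, best)
        if alpha >= beta:
            break
    return best

# Max chooses a branch, min responds, max picks a leaf; the best value is 5.
tree = [[[3, 5], [6, 9]], [[1, 2], [0, -1]]]
print(alphabeta(tree, float("-inf"), float("inf"), True))  # prints 5
```

Scale the tree up, add a clever evaluation function and a big memory of positions, and you have the skeleton of a champion-beating program with no insight into intelligence anywhere in it.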
I think we have lots of wonderful technology as a result of our quest for AI. But at best we have intelligence-amplifying programs, not intelligent programs.
I’m happy to be proven wrong. Unlike some people, I won’t feel that an artificially intelligent program threatens my sense of being. At best, it will have the same effect as knowing that really smart humans exist: some jealousy and envy, but I go on with my life and do stuff.
I for one welcome our new robot overlords. But they’ll be a very long time in coming, and when they get here, they almost certainly won’t be interested in being our overlords, just like I don’t want to be the ruler of a kingdom full of two-year-olds.
Over the past few years, I’ve written a fair amount of text using HTML-based programs like MediaWiki, Atlassian’s Confluence, or Markdown. I’ve used Microsoft Word more than I care to admit. And while I can appreciate the niches that both Illustrator and OmniGraffle fill, and use both moderately heavily, neither is a general-purpose drawing program. Frankly, the entire crop of writing and drawing programs I’ve used in the past 10 years sucks.
I miss WriteNow, FrameMaker, MacDraw, Canvas. I miss the promise of OpenDoc (that was so badly betrayed by its lackluster implementation). And while I’m a big booster of open source, none of the open source writing and drawing programs are very usable, stable or powerful.
Everyone writes, everyone draws, and everyone communicates. So, while the market for the tools I want is niche, I think there’s a huge market for a tool suite that can cater to all of us. WriteNow was a very good start for something that was easy to use and could produce decent results. FrameMaker was awesome, and while it had a learning curve, it wasn’t insanely steep like with Adobe Illustrator.
This is not a technical problem any more. Computers are far more powerful and software engineers far better at their craft than they were 30 years ago. It blows my mind that no one has produced a great writing program. It’s also mind-boggling that while we have phenomenal painting tools, we don’t have any great drawing tools. I mean, people use Visio to do their vector graphics! There’s something wrong with a world where that happens.
I would write these, except then I would be even farther from working on the projects I really want to do. We have the user expertise to just sit down and do these.
Part of the problem is that most of the effort goes into web-based tools that create HTML content, and HTML content just isn’t rich enough for book-level layout, not at the source level. You can render something into HTML that looks decent, but it would be too verbose to actually write it that way. And since the effort is going into web-based tools, the drawing side uses SVG, and SVG is still immature, both in rendering and in capability.
Hmm, this just turned into a rant about the idea of separating content from presentation, which was the entire premise of markup in the first place, going back to at least the 1960s (my first experience with markup and writing was with WordStar), and which drove the development of SGML – which most of you only know through its most famous offspring, HTML.
Here’s the killer program that needs to be written: we have very powerful computers that can do pattern recognition, so take advantage of that and create writing and drawing programs that can infer your structure from examples. Let me write in whatever form I want, working in a very concrete manner (“hmm, I’ll make this Bodoni 18-point bold because I need it to stand out as a chapter heading”), and have the software figure out the structure for me (“gosh, a lot of this text is 12-point and strung together in paragraphs, interspersed with some larger text set apart with whitespace; that must be a heading, and that other thing is the first subheading, so I’ll infer that structure”). Then, halfway through, when you want to replace Bodoni with Palatino, you just change it once in the structure that was built for you. Or you tweak the structure, or you completely replace it with the New Yorker style guidelines because you’re submitting part of your magnum opus as a magazine article, or you take 20 of your books and completely reformat them into a consistent whole with a few simple operations.
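That inference step could start as simply as clustering font sizes. Here’s a hedged toy sketch (the run format, names, and sizes are all invented for illustration): assume the most common size is body text, and tag anything larger as a heading, with bigger meaning higher level.

```python
def infer_structure(runs):
    """Guess a document's structure from raw formatting.
    Each run is (text, point_size)."""
    sizes = [size for _, size in runs]
    body = max(set(sizes), key=sizes.count)          # most common size = body text
    heading_sizes = sorted({s for s in sizes if s > body}, reverse=True)
    tagged = []
    for text, size in runs:
        if size > body:
            level = heading_sizes.index(size) + 1    # largest size maps to h1
            tagged.append((f"h{level}", text))
        else:
            tagged.append(("p", text))
    return tagged

runs = [
    ("My Magnum Opus", 18),       # Bodoni 18-point: inferred chapter heading
    ("It was a dark night.", 12),
    ("Part One", 14),             # mid-size: inferred subheading
    ("The rain fell.", 12),
]
print(infer_structure(runs))
# prints [('h1', 'My Magnum Opus'), ('p', 'It was a dark night.'),
#         ('h2', 'Part One'), ('p', 'The rain fell.')]
```

Once the structure exists, swapping Bodoni for Palatino is a single change to what “h1” means, not a hunt through the manuscript.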
Someone please make that, so I don’t have to.
And then, make a drawing program that works the same way. I’d love to draw on a tablet and have my pitiful lines turned into clean, regular shapes, and I’d like to be able to apply styles to drawings the same way we apply style sheets to text. I’d like the drawing program to be able to figure out the difference between a connective line that sticks to a shape and a decorative line that should nonetheless be grouped with related objects even if I don’t drag-select when I move. And of course, when I put graphics into my text, the text and the graphics should know about each other’s visual properties so I can apply consistent styles automatically.
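At its simplest, beautifying those pitiful lines is a fitting problem. Here’s a toy sketch (the tolerance and sample strokes are invented for illustration) that decides whether a hand-drawn stroke is “really” a straight line by checking how far its points stray from a least-squares fit; real ink beautification is of course far more sophisticated.

```python
def looks_like_line(points, tolerance=2.0):
    """Fit y = a*x + b by least squares and call the stroke a straight
    line if every point lies within `tolerance` of the fit."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    denom = n * sxx - sx * sx
    if denom == 0:                 # vertical stroke; handle separately in real code
        return False
    a = (n * sxy - sx * sy) / denom
    b = (sy - a * sx) / n
    return all(abs(y - (a * x + b)) <= tolerance for x, y in points)

wobbly = [(0, 0.3), (10, 9.6), (20, 20.4), (30, 29.8)]   # shaky diagonal stroke
curved = [(0, 0), (10, 25), (20, 0)]                      # a peak, not a line
print(looks_like_line(wobbly), looks_like_line(curved))   # prints True False
```

A real tool would fit circles, arcs, and polygons the same way and snap the stroke to whichever shape explains it best.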
Someone make that too, please, so I don’t have to.