
Happy 21st Century!


Here's the shape of a 21st century I don't want to see. Unfortunately it looks like it's the one we're going to get, unless we're very lucky.

Shorter version is: there will be much dying: even more so than during the worst conflicts of the 20th century. But rather than conventional wars ("nation vs nation") it'll be "us vs them", where "us" and "them" will be defined by whichever dehumanized enemy your network filter bubble points you at—Orwell was ahead of the game with the Two Minutes Hate, something with which all of us who use social media are now uncomfortably, intimately familiar.

People will die in large numbers, but it will happen out of sight. It'll be "soft genocide" or "malign neglect", and the victims will be the climate change refugees who are kept out of sight by virtual walls. On land there may be fences and minefields and debatable ground dominated by gangs, and at sea there may be drone-patrolled waters where refugees can be encouraged to sink and drown out of sight of the denizens of their destination countries. This much we already see. But the exterminatory policies will continue at home in the destination zones as well, and that's the new innovation that is gradually coming online. There will be no death camps in this shiny new extermination system. Rather, death by starvation and exposure will be inflicted by the operation of deliberately broken social security systems (see also universal credit), deportation of anyone who can be portrayed as an un-citizen (the Windrush scandal is an early prototype of this mechanism), and removal of the right to use money (via electronic fund transfers, once cash is phased out) from those deemed undesirable by an extrapolation of today's Hostile Environment Policy and its equivalents.

You don't need to build concentration camps with barbed wire fences and guards if you can turn your entire society into a machine-mediated panopticon with automated penalties for non-compliance.

The Nazis had to leave their offices in order to round people up and brutalize or murder them. They had to travel to the Wannsee Conference to hammer out how to implement Generalplan Ost. Tomorrow's genocides will be decentralized and algorithmically tweaked, quite possibly executed without human intervention.

Why?

The people who buy into the idea of eugenics and racial supremacy—the alt-right and their fellow travellers—will sooner or later have to come to terms with the inevitability of anthropogenic climate change. Right now climate denialism is a touchstone of the American right, but the evidence is almost impossible to argue against, and it's increasingly obvious that many of the people who espouse disbelief are faking it—virtue signalling on the hard right. Sooner or later they'll flip. When they do so, they will inevitably come to the sincere, deeply held belief that culling the bottom 50% to 90% of the planetary population will give them a shot at survival in the post-greenhouse world. (That's the "bottom 50-90%" as defined by white supremacists and neo-Nazis.) They'll justify their cull using the values we're seeing field-tested today: racism, religious and anti-religious bigotry, nationalism, sexism, xenophobia, white supremacism. These are values with tested, proven appeal to [petty authoritarians](https://theauthoritarians.org) who feel that their way of life is under threat.

Of course there will, as time goes on, be fewer and fewer members of the murdering class, as climate insecurity causes periodic crop collapses, automation reduces the amount of human labour required to keep things running, and capital accumulation outstrips labour value accumulation (leading to increased wealth concentration and societal stratification and rigidity).

Who are the murderers? I'll give you a clue: they're the current ruling class and their descendants. A while ago Bruce Sterling described the 21st century as "old people, living in cities, who are afraid of the sky". I'm calling it "wealthy white people, living in cities, who are afraid of the rising seas (and the refugees they'll bring)".

As for what this soft genocide will look like, right here at home in Brexitland ...

Forget barbed wire, concentration camps, gas chambers and gallows, and Hugo Boss uniforms. That's the 20th century pattern of centralized, industrialized genocide. In the 21st century deep-learning mediated AI era, we have the tools to inflict agile, decentralized genocide as a cloud service on our victims.

Think in terms of old age homes where robots curate the isolated elderly (no low-paid immigrant workers needed) and fail to identify their terminal medical conditions until they're too advanced to treat. People fed by vertical farms where solar/battery powered robots attend to the individual plants (thank you, Elon Musk's younger brother), food delivered by self-driving vehicles from lights-out warehouses, an end to high street shopping and restaurants and a phasing out of cash money.

Think in terms of a great and terrible simplification of our society that cleans out all the niches the underclass (which by then will include the struggling middle class) survive within.

Think in terms of policing by ubiquitous surveillance and social scoring and behavior monitoring. Think in terms of punishment by "community service"—picking up litter on starvation wages (and I mean, wages calculated to induce death through slow starvation), where if you fail to comply your ability to purchase the essentials of life using e-cash will simply stop working. Prisons where extensively drug-resistant TB runs rife as a discipline on the community service peons (as in: if you receive the sanction of an actual prison sentence, they won't need to execute you: 50% will be dead within 6 months).

There's no state censor in this regime. Just a filter bubble imposed through your social media and email contacts that downranks anything remotely subversive and gently punishes you if you express an inconvenient opinion or show signs of noticing what's missing—the way you don't see people with dark skins or foreign accents any more, for example. The corporate social media will of course comply with state requirements for a safe and secure internet—if they want to stay in business, that is.

We're getting a glimpse of the way this future is shaped, thanks to Trump and Brexit and, to a lesser extent, China today. Trump has discovered that in times of insecurity, the spectacle of cruelty provides a shared common focus for his supporters. This is nothing new: the Romans were there millennia ago with their festivities at the Colosseum.

What's new is the speed and specificity with which the cruelty can be applied, and the ability to redirect it in a matter of hours—increasing the sense of insecurity, which in turn drives social conservatism and support for violent self-defense.

There is a feedback loop in play. It may already be established globally. And it's going to kill billions of us.


An academic scientist goes to DevOps Days


Last week I took a few days to attend DevOpsDays Silicon Valley. My goal was to learn a bit about how DevOps culture works and what people in this community are excited about and discussing. I’m also interested in learning a thing or two that could be brought back into the scientific / academic world. Here are a couple of thoughts from the experience.

tl;dr: DevOps is more about culture and team process than it is about technology, maybe science should be too…

What is DevOps anyway?

This one is going to be hard to define (though here’s one definition), as I’m new to the community as well. But, my take on this is that DevOps is a coming-together of what was once a bunch of different roles within companies. The process of releasing technology (or doing anything really) involves many different steps with different specializations needed at each step. The ‘old way’ of doing things involved teams that’d build prototypes, teams that would adapt those prototypes to a company’s infrastructure, teams that would maintain and service the “production” deployments, etc. The whole process was quite slow, partially because of the lack of communication between these very different kinds of groups.

Instead, DevOps attempts to encapsulate this entire process under one moniker. It is generally recognized that people do have different skills and roles, but they should be working together in a group to create, mature, and ship new code iteratively and as a single continuous process. DevOps is intently focused on “agile processes”, and values being quick and lightweight, focusing on metrics like “time between development and deployment.” This is only possible with a large focus on team dynamics and on how people with different skillsets and responsibilities can work with one another effectively. Perhaps unsurprisingly, this is the kind of thing that people talk about a lot at DevOps conferences (well, at least at the one I went to).

DevOpsDays was more about people than tech

More than anything else, what struck me was how little emphasis was placed on the technology itself. There are a billion moving parts in the cloud orchestration and container technology space, but they got relatively little discussion time at the conference (with the exception of Jennelle Crothers talking about how Microsoft was trying to make its Windows containers super lightweight, which was pretty neat). In general, the only people consistently talking about the greatness of XXX new software/tool/etc. were salespeople trying to get you to buy their product. Instead, the vast majority of conversations, discussions, brainstorms, etc. were about people and process, not technology per se.

Obviously, it’s difficult to disentangle the tech from the people when you’re in the tech industry, but it was illustrative to see the relative focus that got placed on the “squishier” questions. For example, a few that came up pretty frequently:

  • How can we disseminate information across a distributed team most effectively?
  • How can we create a team culture that welcomes newcomers?
  • How can we avoid alienating members of the team?
  • How can we avoid single points of failure in the team?
  • How can we do things faster, more efficiently, and more reliably as a team?

It’s no coincidence that the word “team” was in each of the bullet points above.

Scientific DevOps?

As a member of the scientific / academic community, I find this quite interesting. A whole conference where people talk about creating positive culture and effective teams? Yes please. However, I realize that the incentives and problems associated with DevOps are not the same as those faced by scientists. For example, a big part of DevOps (and SREs more generally) is ensuring that services, tools, and sites are reliable and stable over time. You can design tech around this idea all you want, but at some point you’ll need a team of people to manage that tech. The DevOps world seems to have realized that this means the social dynamics of that team are just as important as the technology itself, which is a breath of fresh air.

I wonder what scientific DevOps would look like. Scientists are theoretically also operating in team-based environments (at least, the ones in scientific labs). The incentives of reward and recognition are totally misaligned, but it’s still the case that successful teams produce more effective work in general. Perhaps it’s worth exploring how the DevOps take on operations and team dynamics ports to the academic scientific community.

Open and (relatively) diverse culture

A final point that I noticed was that, relative to other tech conferences I’ve attended, this one had a general air of positivity and open culture. There were all kinds of people there, and while the general makeup of attendees definitely still had a lot of white dudes in it, the room nonetheless never felt like it was dominated by this group of people. There was also a great culture of supporting people as they were giving talks - some of the speakers were clearly more nervous than others, and the audience did a good job of trying to disarm their anxiety. Perhaps this is the kind of culture that comes with a profession that depends on (and focuses on) team dynamics and culture.

Ultimately, as with any good conference, I left having more questions than answers. How can scientists improve their own team dynamics using principles from the DevOps community? How can the open-source community do the same but for distributed team workloads and responsibility? Where is the balance between “solve this with tech” and “solve this with people”? How can we encourage more cross-talk between the world of scientific research and the world of tech? Either way, it was an interesting and informative experience, and I’m looking forward to learning more about this community.

Highlights and takeaways

  • Amy Nguyen and Adam Barber shared their strategies for taking a data-driven approach to user interface / experience design.
    • Always collect data from people you design things for.
    • Put in the legwork to interview and gather diverse perspectives before you build tools.
  • Fatema Boxwala (@fatty_box) shared her experience as an intern, and gave some pointers for how to create a welcoming and productive environment for intern positions.
    • Don’t forget that interns have lives too. If they just moved to a new city, help them settle in.
    • Align your intern’s project with something a team member (or yourself) will be actively working on.
  • Katy Farmer (@TheKaterTot) reminded everybody that teams shouldn’t automatically aspire to use the workflows that gigantic tech companies use.
    • Just because it works for a big tech company doesn’t mean it’ll work for you.
    • Being a smaller-sized company isn’t better or worse, it’s just different. Don’t treat company size as a reflection of quality, and don’t assume larger companies do operations more effectively.
  • Frances Hocutt reassured everybody that it’s OK if your tests wouldn’t satisfy a production-level standard all the time.
    • Make sure your tests reflect the current state of your code.
    • Don’t let perfect be an enemy of good. Testing 10% of the code is still better than testing 0%.
  • Jennelle Crothers (@jkc137) described how Microsoft is trying to shrink Windows images so that you can run them in containers more easily.
    • It turns out that shrinking something from several GB to a few hundred MB makes it much easier to ship around :-)
    • As an aside, it’s fascinating to see Microsoft focus on integrating itself with the container ecosystem (as opposed to trying to replace or compete with it). Maybe they really have learned something from their “Linux is a cancer” debacle.
  • Adrian Cockcroft (@adrianco) explained the importance of creating a culture of reporting and logging incidents, even the small ones!
    • Instrument and study the “non-events” - look for “near-misses” and outliers. Never throw away information just because something catastrophic didn’t happen.
    • Plan and practice for chaos! What will your team do if “everything goes wrong”?

Thoughts on retiring from a team


The Rust Community Team has recently been having a conversation about what a team member’s “retirement” can or should look like. I used to be quite active on the team but now find myself without the time to contribute much, so I’m helping pioneer the “retirement” process. I’ve been talking with our subteam lead extensively about how to best do this, in a way that sets the right expectations and keeps the team membership experience great for everyone.

Nota bene: This post talks about feelings and opinions. They are mine and not meant to represent anybody else’s.

Why join a team?

When I joined the Rust community subteam, its purpose was defined vaguely. It was a small group of “people who do community stuff”, and needed all the extra hands it could get. A lot of my time was devoted explicitly to Rust and Rust-related tasks. The tasks that I was doing anyways seemed closely aligned with the community team’s work, so stepping up as a team contributor made a lot of sense. Additionally, the team was so new that the only real story for “how to work with this team and contribute to its work” was “join the team”. We hadn’t yet pioneered the subteams and collaboration with community organizers outside the official community team which are now multiplying the team’s impact.

Why leave?

I’m grateful to the people who have the bandwidth and interest to put consistent work into participating on Rust’s community team today. As the team has grown and matured, its role has transitioned from “do community tasks” to “support and coordinate the many people doing those tasks”. I neither enjoy nor excel at such coordination tasks. Not only do I have less time to devote to Rust stuff, but the community team’s work has naturally grown into higher-impact categories that I personally find less fulfilling and more exhausting to work on.

Teams and people change

In a way, the team’s growth and refinement over the years strikes me as a microcosm of what I saw while working at a former startup as it built up into an enterprise company. Some peoples’ working style had been excellently suited to the 5-person company they originally joined, but clashed with the 50-person company into which that startup grew. Others who would never have thrived in a company of only 10 people hired on and had a fantastic impact scaling the company up to 1,000. And some were fine when the company was small and didn’t mind being part of a larger organization either. That experience reminds me that the fit between a person and organization at some point in the past does not guarantee that they’ll remain a good fit for each other over time, and neither is necessarily to blame for the eventual mismatch as both grow and change.

Does leaving harm anyone?

When you’re appreciated and valued for the work you do on a team, it’s easy to get the idea that the team would be harmed if you left. The tyres on my bike are a Very Important Part of the bike, and if I took them off, the bike wouldn’t be rideable. But a team isn’t just a machine – a team’s impact is an emergent phenomenon that comes out of many factors, not a static item. If a sports team has a really excellent coach, they’ll retain the lessons they learned from that coach’s mentorship even after the coach moves away. Older players will pass along the coach’s lessons to younger ones, and their ideas will stick around and improve the group even long after the original players’ retirement. When a team is coordinated well, one member leaving doesn’t hurt it. And if I leave on good terms rather than sticking around till I burn out or burn bridges, I can always be available for remaining members to consult if they need advice that only I can provide.

Would staying harm anyone?

I think that in the case of the Rust community team, it would reflect poorly on the community as a whole if the exact same people comprised the community team for the entire life of the language.

If nobody new ever joins the team, we wouldn’t get new ideas and tactics, nor the priceless infusion of fresh patience and optimism that new team members bring to our perennial challenges and frustrations. So, new team members are essential. If new people joined on a regular basis but nobody ever left, the team would grow unboundedly large as time went on, and have you ever tried to get anything done with a hundred- or thousand-person committee? In my opinion, having established team members retire every now and then is an essential factor in preventing either of those undesirable hypotheticals.

The team selects for members who’ll step up and accomplish tasks when they need to. I think establishing turnover in a healthy and sustainable way is one of the most essential tasks for the team to build its skills at. The best way to get a healthy amount of turnover – not too much, but not too little either – is for every team member to step up to the personal challenge of identifying the best time to retire from active involvement. And for me, that happens to look like right now.

Aspirational Clutter

Do you have stuff in your house that you don’t use, and it’s taking up space, and you’re kind of annoyed at it for taking up space, but you don’t feel like you can get rid of it because you think you really should use it, or you’re sure you’re just going to make some personal change that will cause you to use it someday? I call that stuff aspirational clutter: It doesn’t belong to you, it belongs to some imaginary person who doesn’t exist but you aspire to become them someday.

A team meeting every week on your agenda can be aspirational clutter in the same way as a jumbled shelf of planners or a pile of sports gear covering a treadmill: It not only isn’t a good fit for who you are right now, but by wasting time or space it actually gets in the way of the habits and changes that would make you more like that person you aspire to be.

I find few experiences more existentially miserable than feeling obliged to promise work that I know I’ll lack the resources of time or energy to deliver. Sticking around on a team that I’m no longer a good fit for puts me in a situation where I get to choose between feeling guilty if I don’t promise to get any work done, or feeling like a disappointment for letting others down if I commit to more than I’m able to deliver. Those aren’t feelings I want to experience, and I can avoid them easily by being honest with myself about the amount of time and energy I have available to commit to the team.

The benefits of contributing from a non-team-member role

One scary idea that comes up when leaving a team is the question: “if I’m not on the team, how can I help with the team’s work?”.

In my opinion, it builds a healthier community if people who are good at a given team’s work spend some time interfacing with the team from the perspective of non-team-members. If I know how the community team gets stuff done and I go “undercover” as a non-team-member coming to them for help, I can give them essential feedback to improve the experience and processes that non-team-members encounter.

When I wear my non-team-member hat and try to get stuff done, I learn what it’s like for everyone else who tries to interface with the team. I can then use the skills I built by participating on the team to remedy any challenges that a non-team-member encounters. Those changes create a better experience for every community member who interacts with the team afterwards.

What next?

As a community team alum, I’ll keep doing the Rust outreach – the meetup organizing, the conference talks, the cute swag, the stickers – that I’ve been doing all along. Stepping down from the official team member list just formalizes the state that my involvement has been in for the past year or so: Although I get the community team’s support for my endeavors when I need it, I’m not invested in the challenges of supporting others’ work which the team is now tackling.

I’m proud of the impact that the team has had while I’ve been a part of it, and I look forward to seeing what it will continue to accomplish. I’m grateful for all the leadership and hard work that have gone into making the Rust community subteam an organization from which I can step back while remaining confident that it will keep excelling and evolving.

Why blog all that?

I’m publishing my thoughts on leaving in the hopes that they can help you, dear reader, gain some perspective on your own commitments and curate them in whatever way is best for you.

If you read this and feel pressured to leave something you love and find fulfilling, please try to forget you ever read this post.

If you read this hoping it would give you some excuse to quit a burdensome commitment and feel disappointed that I didn’t provide one, here it is now: You don’t need a fancy eloquent excuse to stop doing something if you don’t want to any more. Replace unfulfilling pursuits with better ones.


Program Synthesis is Possible


Program synthesis is not only a hip session title at programming languages conferences. It’s also a broadly applicable technique that people from many walks of computer-science life can use. But it can seem like magic: automatically generating programs from specifications sounds like it might require a PhD in formal methods. Wisconsin’s Aws Albarghouthi wrote a wonderful primer on synthesis last year that helps demystify the basic techniques with example code. Here, we’ll expand on Aws’s primer and build a tiny but complete-ish synthesis engine from scratch.

You can follow along with my Python code or start from an empty buffer.

Z3 is Amazing

We won’t quite start from scratch—we’ll start with Z3 and its Python bindings. Z3 is a satisfiability modulo theories (SMT) solver, which is like a SAT solver with theories that let you express constraints involving integers, bit vectors, floating point numbers, and what have you. We’ll use Z3’s Python bindings. On a Mac, you can install everything from Homebrew:

$ brew install z3 --with-python

Let’s try it out:

import z3

To use Z3, we’ll write a logical formula over some variables and then solve it to get a model, which is a valuation of the variables that makes the formula true. Here’s one formula, for example:

formula = (z3.Int('x') / 7 == 6)

The z3.Int call introduces a Z3 variable. Running this line of Python doesn’t actually do any division or equality checking; instead, the Z3 library overloads Python’s / and == operators on its variables to produce a proposition. So formula here is a logical proposition of one free integer variable, $x$, that says that $x \div 7 = 6$.
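To demystify that overloading trick, here’s a toy analogy in plain Python. This is purely illustrative and not how Z3 is actually implemented: a class whose operators build a description of the computation instead of performing it.

```python
class Expr:
    """Toy expression builder: operators construct a tree, not a result."""
    def __init__(self, op, *args):
        self.op, self.args = op, args

    def __truediv__(self, other):
        # x / 7 builds a 'div' node instead of dividing.
        return Expr('div', self, other)

    def __eq__(self, other):
        # x == 6 builds an 'eq' node (a proposition) instead of comparing.
        return Expr('eq', self, other)

def Int(name):
    return Expr('var', name)

# Like z3.Int('x') / 7 == 6: no arithmetic happens; we get a proposition.
formula = Int('x') / 7 == 6
print(formula.op)          # 'eq'
print(formula.args[0].op)  # 'div'
```

Z3’s Python bindings do something morally similar: each overloaded operator returns an AST node that the solver later interprets.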

Let’s solve formula. We’ll use a little function called solve to invoke Z3:

def solve(phi):
    s = z3.Solver()
    s.add(phi)
    s.check()
    return s.model()

Z3’s solver interface is much more powerful than what we’re doing here, but this is all we’ll need to get the model for a single problem:

print(solve(formula))

On my machine, I get:

[x = 43]

which is admittedly a little disappointing, but at least it’s true: using integer division, $43 \div 7 = 6$.

Z3 also has a theory of bit vectors, as opposed to unbounded integers, which supports shifting and whatnot:

y = z3.BitVec('y', 8)
print(solve(y << 3 == 40))

There are even logical quantifiers:

z = z3.Int('z')
n = z3.Int('n')
print(solve(z3.ForAll([z], z * n == z)))

Truly, Z3 is amazing. But we haven’t quite synthesized a program yet.

Sketching

In the Sketch spirit, we’ll start by synthesizing holes to make programs equivalent. Here’s the scenario: you have a slow version of a program you’re happy with; that’s your specification. You can sort of imagine how to write a faster version, but a few of the hard parts elude you. The synthesis engine’s job will be to fill in those details so that the two programs are equivalent on every input.

Take Aws’s little example: you have the “slow” expression x * 2, and you know that there’s a “faster” version to be had that can be written x << ?? for some value of ??. Let’s ask Z3 what to write there:

x = z3.BitVec('x', 8)
slow_expr = x * 2
h = z3.BitVec('h', 8)  # The hole, a.k.a. ??
fast_expr = x << h
goal = z3.ForAll([x], slow_expr == fast_expr)
print(solve(goal))

Nice! We get the model [h = 1], which tells us that the two programs produce the same result for every byte x when we left-shift by 1. That’s (a very simple case of) synthesis: we’ve generated a (subexpression of a) program that meets our specification.

Without a proper programming language, however, it doesn’t feel much like generating programs. We’ll fix that next.

A Tiny Language

Let’s conjure a programming language. We’ll need a parser; I choose Lark. Here’s my Lark grammar for a little language of arithmetic expressions, which I ripped off from the Lark examples and which I offer to you now for no charge:

GRAMMAR = """
?start: sum

?sum: term
  | sum "+" term        -> add
  | sum "-" term        -> sub

?term: item
  | term "*"  item      -> mul
  | term "/"  item      -> div
  | term ">>" item      -> shr
  | term "<<" item      -> shl

?item: NUMBER           -> num
  | "-" item            -> neg
  | CNAME               -> var
  | "(" start ")"

%import common.NUMBER
%import common.WS
%import common.CNAME
%ignore WS
""".strip()

You can write arithmetic and shift operations on literal numbers and variables. And there are parentheses! Lark parsers are easy to use:

import lark
parser = lark.Lark(GRAMMAR)
tree = parser.parse("(5 * (3 << x)) + y - 1")

As for any good language, you’ll want an interpreter. Here’s one that processes Lark parse trees and takes a function in as an argument to look up variables by their names:

def interp(tree, lookup):
    op = tree.data
    if op in ('add', 'sub', 'mul', 'div', 'shl', 'shr'):
        lhs = interp(tree.children[0], lookup)
        rhs = interp(tree.children[1], lookup)
        if op == 'add':
            return lhs + rhs
        elif op == 'sub':
            return lhs - rhs
        elif op == 'mul':
            return lhs * rhs
        elif op == 'div':
            return lhs / rhs
        elif op == 'shl':
            return lhs << rhs
        elif op == 'shr':
            return lhs >> rhs
    elif op == 'neg':
        sub = interp(tree.children[0], lookup)
        return -sub
    elif op == 'num':
        return int(tree.children[0])
    elif op == 'var':
        return lookup(tree.children[0])

As everybody already knows from their time in CS 6110, your interpreter is just an embodiment of your language’s big-step operational semantics. It works:

env = {'x': 2, 'y': -17}
answer = interp(tree, lambda v: env[v])

Nifty, but there’s no magic here yet. Let’s add the magic.

From Interpreter to Constraint Generator

The key ingredient we’ll need is a translation from our source programs into Z3 constraint systems. Instead of computing actual numbers, we want to produce equivalent formulas. For this, Z3’s operator overloading is the raddest thing:

formula = interp(tree, lambda v: z3.BitVec(v, 8))

Incredibly, we get to reuse our interpreter as a constraint generator by just swapping out the variable-lookup function. Every Python + becomes a plus-constraint-generator, etc. In general, we’d want to convince ourselves of the adequacy of our translation, but reusing our interpreter code makes this particularly easy to believe. This similarity between interpreters and synthesizers is a big deal: it’s an insight that Emina Torlak’s Rosette exploits with great aplomb.

Finishing Synthesis

With formulas in hand, we’re almost there. Remember that we want to synthesize values for holes to make two programs equivalent, so we’ll need two Z3 expressions that share variables. I wrapped up an enhanced version of the constraint generator above in a function that also produces the variables involved:

expr1, vars1 = z3_expr(tree1)
expr2, vars2 = z3_expr(tree2, vars1)

And here’s my hack for allowing holes without changing the grammar: any variable name that starts with an “h” is a hole. So we can filter out the plain, non-hole variables:

plain_vars = {k: v for k, v in vars1.items()
              if not k.startswith('h')}

All we need now is a quantifier over equality:

goal = z3.ForAll(
    list(plain_vars.values()),  # For every valuation of variables...
    expr1 == expr2,  # ...the two expressions produce equal results.
)

Running solve(goal) gets a valuation for each hole. In my complete example, I’ve added some scaffolding to load programs from files and to pretty-print the expression with the synthesized values substituted for the holes. It expects two programs, the spec and the hole-ridden sketch, on two lines:

$ cat sketches/s2.txt
x * 10
x << h1 + x << h2

It absolutely works:

$ python3 ex2.py < sketches/s2.txt
x * 10
(x << 3) + (x << 1)
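If you want to sanity-check that the ForAll goal is the right one without a solver handy, a brute-force stand-in (function names are mine) can play the role of ForAll plus solve for this tiny example: enumerate hole values and test equality over every 8-bit input.

```python
def spec(x):
    return (x * 10) & 0xFF                 # 8-bit wraparound, like BitVec(8)

def sketch(x, h1, h2):
    return ((x << h1) + (x << h2)) & 0xFF  # x << h1 + x << h2

# What ForAll + solve accomplishes, done the slow way: find hole values
# that make the sketch match the spec for EVERY 8-bit x.
solution = next(
    (h1, h2)
    for h1 in range(8) for h2 in range(8)
    if all(spec(x) == sketch(x, h1, h2) for x in range(256))
)
print(solution)   # (1, 3): (x << 1) + (x << 3) == x * 10
```

The solver reaches the same kind of answer, just without the exponential enumeration.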

Better Holes with Conditions

Our example so far can only synthesize constants, which is nice but unsatisfying. What if we want to synthesize a shifty equivalent to x * 9, for example? We might think of a sketch like x << ?? + ??, but there is no pair of literal numbers we can put into those holes to make it equivalent: the second hole would need to be x itself. How can we synthesize a wider variety of expressions in the holes, such as the variable x?

We can get this to work without fundamentally changing our synthesis strategy. We will, however, need to add conditions to our language. We’ll need to extend the parser with a ternary operator:

?start: sum
  | sum "?" sum ":" sum -> if

And I’ll add a very suspicious-looking case to our interpreter:

elif op == 'if':
    cond = interp(tree.children[0], lookup)
    true = interp(tree.children[1], lookup)
    false = interp(tree.children[2], lookup)
    return (cond != 0) * true + (cond == 0) * false

These funky multiplications are just a terrible alternative to Python’s built-in condition expression. I’m using this here instead of a straightforward true if cond else false because this works in both interpreter mode and in Z3 constraint generation mode and behaves the same way. I apologize for the illegible code but not for the convenience of a single implementation.
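In plain Python, the multiplications work because comparisons yield True and False, which behave as 1 and 0 in arithmetic; in Z3 mode the same comparisons build formulas instead. A quick concrete check of the trick (the helper name is mine):

```python
def select(cond, true_val, false_val):
    # Branchless conditional: (cond != 0) evaluates to True or False,
    # and Python treats those as 1 and 0 under multiplication.
    return (cond != 0) * true_val + (cond == 0) * false_val

print(select(1, 42, 7))   # 42: the "true" branch is selected
print(select(0, 42, 7))   # 7: the "false" branch is selected
```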

The trick is to use conditional holes to switch between expression forms. Here’s an implementation of the sketch we want above:

$ cat sketches/s4.txt
x * 9
x << (hb1 ? x : hn1) + (hb2 ? x : hn2)

Each ternary operator here represents a ?? hole in the sketch we wanted to write, x << ?? + ??. By choosing the values of hb1 and hb2, the synthesizer can choose whether to use a literal or a variable in that place. In a proper synthesis system, we’d hide these conditions from the surface syntax—the constraint generator would insert a condition for every ??. By conditionally switching between a wider variety of syntax forms—even using nested conditions for nested expressions—the tool can synthesize complex program fragments in each hole.
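To see exactly what the synthesizer is choosing between, here's the same brute-force stand-in as before (names mine, not the post's tooling) extended with conditional holes: each hole becomes a flag that picks either the variable x or a small literal.

```python
from itertools import product

def spec(x):
    return (x * 9) & 0xFF

def sketch(x, use_x1, n1, use_x2, n2):
    a = x if use_x1 else n1            # models hb1 ? x : hn1
    b = x if use_x2 else n2            # models hb2 ? x : hn2
    return ((x << a) + b) & 0xFF       # << binds tighter than + here

# Enumerate flag/literal choices until the sketch matches x * 9 everywhere.
solution = next(
    holes
    for holes in product([True, False], range(8), [True, False], range(8))
    if all(spec(x) == sketch(x, *holes) for x in range(256))
)
print(solution)   # (False, 3, True, 0): (x << 3) + x == x * 9
```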

Keep Synthesizing

It may be a toy language, but we’ve built a synthesizer! Program synthesis is a powerful idea that can come in handy in far-flung domains of computer science. To learn more about the hard parts that come next, I recommend James Bornholt’s overview. And you must check out Rosette, a tool that gives you the scaffolding to write synthesis-based tools without interacting with an SMT solver directly as we did here.


Software as an academic publication


Software has for a while now played a weird and uncomfortable role in the academic statistics world. When I first started out (circa 2000), I think developing software was considered “nice”, but for the most part, it was not considered valuable as an academic contribution in the statistical universe. People were generally happy to use the software and extol its virtues, but when it came to evaluating a person’s scholarship, software usually ranked somewhere near the bottom of the list, after papers, grants, and maybe even JSM contributed talks.

Journals like the Journal of Statistical Software tried to remedy the situation by translating the contribution of software into a commodity that people already understood: papers. (Okay, thinking of papers as “commodities” is probably worth another blog post.) The idea with JSS, as I understood, was that you could publish a paper about your software, or maybe even a manual, and then that would be peer-reviewed in the usual way. If it passed peer-review, it would be published (electronically) and then you could list that paper on your CV. Magic! The approach was inefficient from the start, because it placed an additional burden on software developers. Not only did they have to write (and support) the software, but they had to write a separate paper to go along with it. Writers of theorems were not saddled with such requirements. But at the time of its founding around 2004, the JSS approach was perhaps a reasonable compromise.

These days, I think many people would be comfortable seeing a piece of software as a per se academic contribution. A software package can, for the most part, stand alone and people will recognize it. I personally do not think authors should be forced to write a separate paper, essentially translating the software from code into English or whatever natural language. The Journal of Open Source Software is a move in this “opposite” direction, where no paper is required, beyond the software itself and a brief summary of the software’s purpose and how to use it. This process is similar to ideas that people have had regarding the publishing of data: why not just let people who collect data publish the dataset alone and get credit for doing so? I think the JOSS approach is basically the way to go for academic software contributions, with one small caveat.

Evaluation of most software places a heavy emphasis on usefulness. The instructions for authors on the JSS web site indicate how each paper and software would be evaluated for publication:

The review has two parts: both the software and the manuscript are reviewed. The software should work as indicated, be clearly documented, and serve a useful purpose. Reviewers are instructed to evaluate both correctness and usefulness. Special emphasis is given on the reproducibility of the results presented in the submission.

JOSS takes a slightly different approach but ultimately has a similar standard. The software

Should be a significant contribution to the available open source software that either enables some new research challenges to be addressed or makes addressing research challenges significantly better (e.g., faster, easier, simpler)

In both cases, there is an emphasis on usefulness which, as a goal, is difficult to argue against. However, if you look at the goals of most academic journals, usefulness is not amongst the criteria for evaluating contributions. If it were, you’d have to delete most of the journals in existence today. This is a frequent argument that basic scientists and theorists have with policymakers—whether their work is “useful” is not the point. Advancing the state of knowledge about the world is important in and of itself and the usefulness of that knowledge may be difficult to ascertain in the short run. Journals typically strive to publish papers that represent an advance in knowledge in a particular field. Often there is a vague mention of “impact” of the work, but how wide that impact is likely to be will depend on the nature of the field.

What exactly is the advance in knowledge that is obtained when software is developed and distributed? I don’t ask this question because I don’t think there is an advance. I ask it because I think there is definitely knowledge that is gained in the process of developing software, but it’s usually not communicated to anyone. This knowledge is typically not obtained via the scientific process. Rather it is often gained through personal experience. We have well-established methods for distributing software once it is developed, but we do not have well-established venues for distributing any knowledge that is obtained, particularly any informal knowledge or personal stories.

If you’ve ever seen someone give a presentation or talk about some software they’ve written, you know it’s not the same as reading the manual for the software. One reason why is that the presenter will describe the process through which they went about writing the software and, usually as asides, mention some lessons learned along the way. I think these “lessons learned” are critically important, and make up a relevant contribution to the scientific community. If I mention that I started out writing this software using R’s S4 class/method system but realized it was too complicated and annoying and so went back to S3, that’s a useful lesson learned. As a fellow developer, I might reconsider starting my next project with S4. However, unless we take a critical eye to the git logs for a given software package, we would never know this by simply using the software. It would seem as if the developer went with S3 from the get go for unknown reasons.

I think it would be nice if software publications included a brief summary of any lessons learned in the process of developing the software. Many developers already do this via blog posts or similar media. But many do not and we are left to wonder. Maybe they did some informal user testing and found that people preferred one interface over another interface. Anything that other people (and developers) might find useful, beyond the software itself. It might be a paragraph or even just a set of bullet points. It might be more but I’m not inclined to require any specific length or format. To be clear, this is not equivalent to a scientific study, but it’s potentially useful information nonetheless.

One good example of something like this is the original 1996 R publication in the Journal of Computational and Graphical Statistics by Robert Gentleman and Ross Ihaka. In fact, the abstract for the article says it all:

In this article we discuss our experience designing and implementing a statistical computing language. In developing this new language, we sought to combine what we felt were useful features from two existing computer languages. We feel that the new language provides advantages in the areas of portability, computational efficiency, memory management, and scoping.

With many popular software packages there are often blog posts that people write to describe how they use the software for their particular purpose. These posts will sometimes praise or criticize the software and in aggregate provide a good sense of the user experience. The kind of information I’m talking about gives us insight into the developer experience. For any reasonably well-developed piece of software, there are always some lessons learned that the author ultimately keeps to themselves. I think that’s a shame and it would be nice if we could all learn something from their experience.


Meetings


This thread on Twitter sparked a lot of interest, so I hope it’s useful if I publish the whole section on meetings from the upcoming revision of How to Teach Programming (and Other Things).


Most people are really bad at meetings: they don’t have an agenda going in, they don’t take minutes, they waffle on or wander off into irrelevancies, they repeat what others have said or recite banalities simply so that they’ll have said something, and they hold side conversations (which pretty much guarantees that the meeting will be a waste of time). Knowing how to run a meeting efficiently is a core skill for anyone who wants to get things done. (Knowing how to take part in someone else’s meeting is just as important, but gets far less attention—as a colleague once said, everyone offers leadership training, nobody offers followership training.) The most important rules for making meetings efficient are not secret, but are rarely followed:

  • Decide if there actually needs to be a meeting. If the only purpose is to share information, have everyone send a brief email instead. Remember, you can read faster than anyone can speak: if someone has facts for the rest of the team to absorb, the most polite way to communicate them is to type them in.

  • Write an agenda. If nobody cares enough about the meeting to write a point-form list of what’s supposed to be discussed, the meeting itself probably doesn’t need to happen.

  • Include timings in the agenda. Agendas also help you keep the meeting moving (as in, “That’s very interesting, but we have five other topics to get through in the next fifteen minutes, so could you please make your point?”), so include the time to be spent on each item in the agenda. Your first estimates with any new group will be wildly optimistic, so revise them upward for subsequent meetings.

  • Prioritize. Every meeting is a micro-project, so work should be prioritized in the same way that it is for other projects: things that will have high impact but take little time should be done first, and things that will take lots of time but have little impact should be skipped.

  • Put someone in charge. “In charge” means keeping the meeting moving, glaring at people who are muttering to one another or checking email, and telling people who are talking too much to get to the point. It does not mean doing all the talking; in fact, whoever is in charge will usually talk less than anyone else, just as a referee usually kicks the ball less often than the players.

  • Require politeness. No one gets to be rude, no one gets to ramble, and if someone goes off topic, it’s the chair’s job to say, “Let’s discuss that elsewhere.”

  • No technology. Insist that everyone put their phones, tablets, and laptops into politeness mode (i.e., close them). If this is too stressful, let participants hang on to their electronic pacifiers but turn off the network so that they really are using them just to take notes or check the agenda. Be sure to make it clear that assistive technology is exempt from this rule.

  • No interruptions. Participants who want to speak next should instead raise a finger, put up a sticky note, or make one of the other gestures people make at high-priced auctions. If the speaker doesn’t notice you, the person in charge ought to.

  • Record minutes. Someone other than the chair should take point-form notes about the most important pieces of information that were shared, and about every decision that was made or every task that was assigned to someone.

  • Take notes. While other people are talking, participants should take notes of questions they want to ask or points they want to make. (You’ll be surprised how smart it makes you look when it’s your turn to speak.)

  • End early. If your meeting is scheduled for 10:00-11:00, you should aim to end at 10:55 to give people time to get where they need to go next.

As soon as the meeting is over, the minutes should be circulated (e.g., emailed to everyone or posted to a wiki):

  • People who weren’t at the meeting can keep track of what’s going on. You and your fellow students all have to juggle assignments from several other courses while doing this project, which means that sometimes you won’t be able to make it to team meetings. A wiki page, email message, or blog entry is a much more efficient way to catch up after a missed meeting or two than asking a team mate, “Hey, what did I miss?”

  • Everyone can check what was actually said or promised. More than once, I’ve looked over the minutes of a meeting I was in and thought, “Did I say that?” or, “Wait a minute, I didn’t promise to have it ready then!” Accidentally or not, people will often remember things differently; writing it down gives team members a chance to correct mistaken or malicious interpretations, which can save a lot of anguish later on.

  • People can be held accountable at subsequent meetings. There’s no point making lists of questions and action items if you don’t follow up on them later. If you’re using a ticketing system, the best thing to do is to create a ticket for each new question or task right after the meeting, and update those that are being carried forward. That way, your agenda for the next meeting can start by rattling through a list of tickets.

This isn’t the only way to run a meeting: different cultures have different conventions, and as the group becomes larger or more adversarial, you’ll need something more formal like Robert’s Rules of Order. But if there are a dozen people or less, and the session is only supposed to be an hour long, the outline above will get you a long way.

However…

Some people are so used to the sound of their own voice that they will insist on talking half the time no matter how many other people are in the room. One way to combat this is to give everyone three sticky notes at the start of the meeting. Every time they speak, they have to take down one sticky note. When they’re out of notes, they aren’t allowed to speak until everyone has used at least one, at which point everyone gets all of their sticky notes back. This ensures that nobody talks more than three times as often as the quietest person in the meeting, and completely changes the dynamics of most groups: people who have given up trying to be heard because they always get trampled suddenly have space to contribute, and the overly-frequent speakers quickly realize just how unfair they have been.

Another useful technique is called interruption bingo. Draw a grid, and label the rows and columns with the participants’ names. Each time someone interrupts someone else, add a tally mark to the appropriate cell. Halfway through the meeting, take a moment to look at the results. In most cases, you will see that one or two people are doing all of the interrupting, often without being aware of it. After that, saying, “All right, I’m adding another tally to the bingo card,” is often enough to get them to throttle back.
