Friday, July 27, 2018

Testing Encryption - 3 years of Dan Boneh's Online Cryptography Course

Three years ago in July, I completed Dan Boneh's online cryptography course with distinction through Coursera's Cryptography 1.  Since then, I've had the opportunity to use and test cryptographic systems at work and for hobbies.  Here are a few lessons learned when testing encryption.

I have found my fair share of bugs in the crypto we chose to use at work.  I've gotten into a routine when testing encryption used for message authentication:
  • Test the same plaintext multiple times.  Does the MAC need to be different each time?  How much of it changes each time?  It helps to explore the data your hashing function spits out; it can tell you a lot about how the function does what it does.
  • Replay it.  How can a user abuse identical MAC'd data if they replay it at a later date?  For a different user?  Can you add items to the plaintext that will allow you to validate not only the data but the source or timeframe as well?
  • Ensure your hashes are detecting changes. Is your MAC rejected if you change the data at various places within the message?
  • Rotate the key. Do you need a hash to survive a key change?  Usually you can just regenerate the data and re-MAC it, so figure out if you really need to use MACs over long lifetimes.  They're easy to compute.
  • Generate a bunch at once.  Is performance an issue with the service?  Most hashes are built for speed, but is yours?
For each of these failure modes, I'm looking mostly for hints of weakness.  I'm expecting pseudo-random noise, but how does my brain distinguish that from almost random noise?
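
As a concrete example, here is a minimal sketch of the first and third checks in Node.js.  It assumes an HMAC-SHA256 construction and made-up messages; substitute whatever MAC and data your system actually uses.

const crypto = require('crypto');

const key = crypto.randomBytes(32);
const mac = (msg) => crypto.createHmac('sha256', key).update(msg).digest('hex');

// Same plaintext, same key: a deterministic MAC repeats exactly.
// If the design needs per-message uniqueness, something (nonce, timestamp,
// sequence number) has to be part of the input.
console.log(mac('transfer $10 to alice'));
console.log(mac('transfer $10 to alice'));

// Change the data at various places: every change should produce a
// completely different tag, not one that differs in just a few characters.
console.log(mac('transfer $11 to alice'));
console.log(mac('transfer $10 to alicf'));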

There are many times when you need to generate a value that is both unique and random but don't have the space to use a GUID.  To evaluate whether a solution will be "unique enough", check out the Birthday problem Wikipedia page, and this table of probabilities in particular.  Find out how many possible values exist (9 numeric digits = 10^9 ~= 2^30).  On the table, compare that value as the hash space size against the number of times you'll be setting this value.  This will tell you if the algorithm you want to use is sufficient.  If you are making long-term IDs that can only be created once, you obviously want the probability of collision to be extremely low.  If you can recover from a collision by creating a new transaction fairly readily, you might not need as much assurance.  I've used this to help drive a decision to increase unique token size from 13 to 40 characters, to guide switching from SQL auto-numbers to random digits to hide transaction volumes, and to ensure internal transaction IDs are unique enough to guide troubleshooting and reporting.
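
If you'd rather compute than eyeball the table, the usual approximation is p = 1 - e^(-n(n-1)/2N), where N is the size of the value space and n is how many values you'll generate.  A back-of-the-envelope sketch in JavaScript, with purely illustrative numbers (the 36-symbol alphabet is an assumption):

// Approximate birthday-collision probability.
// N = size of the value space, n = number of values you will generate.
const collisionProbability = (n, N) => 1 - Math.exp(-n * (n - 1) / (2 * N));

// 9 numeric digits: N = 10^9.  Issuing 40,000 IDs already gives better-than-even odds.
console.log(collisionProbability(4e4, 1e9));        // ~0.55

// Alphanumeric tokens, 13 vs. 40 characters, at a billion tokens issued.
console.log(collisionProbability(1e9, 36 ** 13));   // ~0.003
console.log(collisionProbability(1e9, 36 ** 40));   // effectively 0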

Time and again, the past three years have taught me that cryptography must be easy for it to be used widely.  I've stayed with Signal for text messaging because it just works.  I can invite friends and not be embarrassed at its user interface.  It doesn't tick all the boxes (anonymity is an issue with a centralized solution), but it has enough features to be useful and few shortcomings.  This is the key to widespread adoption of encryption for securing communications.  Since Snowden revealed the extent of the NSA's data collection capability, sites everywhere have switched on HTTPS through Let's Encrypt.  Learning more about each implementation of SSH and TLS in the course was both informative and daunting, and I was anxious to get HTTPS enabled without rehosting the site on my own.  In early 2018, Blogger added the ability to do just that through Let's Encrypt.  It requires zero configuration once toggled on.  I can't sing its praises enough.  The content of this blog isn't exactly revolutionary, but this little move toward a private and authentic web helps us all.

Dan Boneh's Cryptography course continues to inform my testing.  The core lesson still applies: "Never roll your own cryptography."  And the second is how fragile these constructs are.  Randomness is only random enough given the time constraints.  Secure is only secure enough for this defined application.  Every proof in the course is only as good as our understanding of the math, and every implementation is vulnerable at the hardware, software, and user layers.  In spite of this, it continues to work because we test it and prove it hasn't broken yet.  I'm looking forward to another three years of picking it apart.

Wednesday, July 25, 2018

Wristband Teardown from Amazon's #FireTVSDCC Event at San Diego Comic Con

A friend returned from San Diego Comic Con 2018 with an RFID bracelet used to track users in the Amazon Fire TV experience (on Twitter, #FireTVSDCC).  This is a teardown of the bracelet after the event.  At this time, I was unable to read from the bracelet.



The bracelet is fairly simple, with a cloth band and a plastic/paper tab threaded through.  The closure is plastic and one-way.  It bites into and mangles the cloth band if you attempt to remove it, but you could probably shim it with the right tools and some practice.  It might be a fun challenge for the Tamper Evident Village if it turns out events are trying to use these for access control the way they use plastic self-destructing wristbands.


The back contains a serial number.  I would like to see whether this serial number matches the data read off the tag.



Separating the badge by prying the covers apart, I spot the prize: an adhesive RFID tag placed between the glossy plastic covers.  It appears to have a model number of "CXJ-040" in the center of the tag, and it uses a circular antenna.  CXJ are the initials of the Shenzhen manufacturer Chuangxinjia.  Their product pages show many similar wristbands in a few different frequencies.

The tag didn't respond to my Android phone, so it is not a MIFARE or similar NFC tag.  Hopefully I can find a reader at the local Hackerspace or DEF CON 26.

Monday, July 16, 2018

AES CBC Encryption on OpenVMS

It turns out that using the convenience function ENCRYPT$ENCRYPT_ONE_RECORD on OpenVMS for AES CBC does something squirrelly: it will not let you pass in an IV.  Instead, ENCRYPT$INIT sets the IV to 16 bytes of \0.  To ensure your identical plaintext first blocks aren't identical after encryption, your first 16-byte block needs to be your true IV.  That block gets XORed with the all-zero IV (a no-op) and encrypted, and the resulting ciphertext block is what chains into the next block, so your second block onward is uniquely encrypted.  Or just do it right, avoid the convenience method, and use ENCRYPT$INIT properly.

Ref: http://h41379.www4.hpe.com/doc/83final/4493/4493pro_025.html#encrypt_one_routine

"For AES, the optional P1 argument for the AES IV initialization vector is a reference to a 16-byte (2 quadword) value.
If you omit this argument, the initialization vector used is the residue of the previous use of the specified context block. ENCRYPT$INIT initializes the context block with an initialization vector of zero."
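
To see why the workaround holds, here is a minimal sketch of the same CBC arithmetic in Node.js rather than the OpenVMS routines (the key size, key, and messages are arbitrary): under an all-zero IV, the ciphertext of a random first block becomes the effective IV for everything after it.

const crypto = require('crypto');

const key = crypto.randomBytes(16);
const zeroIV = Buffer.alloc(16, 0);
const randomFirstBlock = crypto.randomBytes(16); // the sacrificial "true IV" block
const plaintext = Buffer.from('sixteen byte msg'.repeat(2)); // 32 bytes, block-aligned

function cbcEncrypt(iv, data) {
    const cipher = crypto.createCipheriv('aes-128-cbc', key, iv);
    cipher.setAutoPadding(false); // keep the block structure visible
    return Buffer.concat([cipher.update(data), cipher.final()]);
}

// What the convenience routine effectively does: zero IV, random block prepended.
const viaZeroIV = cbcEncrypt(zeroIV, Buffer.concat([randomFirstBlock, plaintext]));

// Doing it properly: use the ciphertext of that first block as the actual IV.
const trueIV = viaZeroIV.subarray(0, 16);
const viaRealIV = cbcEncrypt(trueIV, plaintext);

console.log(viaZeroIV.subarray(16).equals(viaRealIV)); // true

Sacrificing the first block as a throwaway random value gives the rest of the message the same protection a real IV would; passing a proper IV through ENCRYPT$INIT is still the cleaner fix.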

Tuesday, June 26, 2018

Interacting with OpenVMS on Mac through Terminal.app and iterm2

Terminal.app and iterm2 can be used on a Mac to interact with an OpenVMS system by setting the right profile settings and mapping keys with escape sequences. Most keys work by default (PF1-PF4 when Fn is used to disable Mac system interactions), but the terminal must be configured correctly to allow access to other keys commonly used in OpenVMS terminal applications (FIND, PREV, NEXT).

PF1-PF4

The F1 - F4 function keys will work as PF1-PF4 if Fn is pressed as well.
  • If your keyboard is a large Mac keyboard with an Fn key above the arrows, access PF1 - PF4 by holding the Fn key and pressing F1 - F4, which turns off the Mac keyboard options (brightness, volume controls, etc.). This also works for some other function keys. Smaller keyboards will need to map F13+. 
  • If you are on a PC keyboard, you can disable the Function Keys functions in System Preferences and return them to act as F1 - F4. 
  • If you don't want your function keys to always act as F1 - F4, the program FunctionFlip can be used to change your function keys back and forth on the fly. 

Accessing Keys with Shift and Alt

Some keys are mapped, but not accessible without using Shift and Alt in combination with the above Fn key/FunctionFlip.

Here are Terminal.app configs:
  • F11: Alt F6 
  • F12: Alt F7 
  • HELP: F15 on an extended keyboard or Shift F7 
  • DO: F16 on an extended keyboard or Shift F8 or Alt F11 
  • F17: Shift F9 or Alt F12 or F17 on an extended keyboard 
  • F18: Shift F10 or F18 on an extended keyboard 
  • F19: Alt F14 on an extended keyboard or F19 on an extended keyboard or map it (see below) 
  • F20: Shift F12 or Alt F15 
Some of the above work for iterm2. Here are alternate mappings:
  • F11 can be accessed with Control F11 
  • F20 will need to be mapped to a key of your choice using escape sequence [34~ 

Mapping Other Keys

Other keys can be mapped within Terminal.app or iterm2 by making a profile.
For Terminal.app:
  1. Open a terminal. 
  2. Go to the Terminal menu, Preferences. 
  3. Add a new profile with the + button at the bottom left. 
  4. Name it 'OpenVMS'. 
  5. On the Text tab, adjust the colors so you can differentiate it from your other terminal windows. 
  6. On the Window tab, adjust the Window Size to 132 Columns if your terminal apps support this width. 
  7. You may need to enable keypad mode to get access to LSE's navigation keys on the keypad (PF1+4 or 5 to seek to the bottom/top). 
On the Keyboard tab, you can add mappings to individual keys that OpenVMS needs for navigation. This is useful in LSE or other text editors. Choose a Key to map and then enter a mapping. Mappings are entered by typing a control character `Ctrl + [` (will appear as \033) followed by some additional keystrokes. The following mappings have been found and are based on this Google Groups thread:
OpenVMS Key    Mac Key    Escape Sequence
FIND           Home       \033[1~
PREV           PgUp       \033[5~
NEXT           PgDown     \033[6~
SELECT         End        \033[4~
F19            ^ F9       \033[33~
F20            ^ F10      \033[34~

For iterm2, use Profiles:

  • Use similar escape sequences for FIND and the other keys as above. On the Profiles, Keys tab, add a hotkey and select "Send Escape Sequence" for the action. Omit the \033 from the table above; FIND ends up as "Send [1~". 
  • Enable Keypad mode for navigating in LSE. Profile, Keys, keypad mode checkbox. This only works for extended keyboards.

Tuesday, June 12, 2018

Quotes from Dan Kaminsky's Keynote at DEF CON China


Above is Dan Kaminsky's keynote at the inaugural DEF CON China.  It was nominally about Spectre and Meltdown, and I thought it was immediately applicable to testing at all levels.  Here are some moments that jumped out at me:

On Context:

"There's a problem where we talk about hacking in terms of only software...What does hacking look like when it has nothing to do with software." 1:55

"But let's keep digging." Throughout, but especially 5:40

"Actual physics encourages 60 frames per second. I did not expect to find anything close to this when I started digging into the number 60...This might be correct, this might not be. And that is a part of hacking too." 6:10

"Stay intellectually honest as go through these deep dives. Understand really you are operating from ignorance. That's actually your strong point. You don't know why the thing is doing what it is doing...Have some humility as you explore, but also explore." 7:40

"We really really do not like having microprocessor flaws...and so we make sure where the right bits come in, the right bits come out. Time has not been part of the equation...Security [re: Specter/Meltdown] has been made to depend on an undefined element. Context matters." 15:00

"Are two computers doing the same thing?...There is not a right answer to that. There is no one context. A huge amount of what we do in hacking...is we play contexts of one another." 17:50

[Re: Spectre and Meltdown] "These attackers changed time which in this context is not defined to exist...Fast and slow...means nothing to the chip but it means everything to the users, to the administrators, to the security models..." 21:00

"Look for things people think don't matter. Look for the flawed assumptions...between how people think the system works and how it actually does." 35:00

"People think bug finding is purely a technical task. It is not because you are playing with people's assumptions...Understand the source and you'll find the destination." 37:05

"Our hardest problems in Security require alignment between how we build systems, and how we verify them. And our best solutions in technology require understanding the past, how we got here." 59:50

On Faulty Assumptions:

"[Example of clocks running slow because power was not 60Hz] You could get cheap, and just use whatever is coming out of the wall, and assume it will never change. Just because you can doesn't mean you should...We'll just get it from the upstream." 4:15

"[Re: Spectre and Meltdown] We turned a stability boundary into a security boundary and hoped it would work. Spoiler alert: it did not work." 18:40

"We hope the design of our interesting architectures mean when we switch from one context to another, nothing is left over...[but] if you want two security domains, get two computers. You can do that. Computers are small now. [Extensive geeking out about tiny computers]" 23:10

"[RIM] made a really compelling argument that the iPhone was totally impossible, and their argument was incredibly compelling until the moment that Steve Jobs dropped an iPhone on the table..." 25:50

"If you don't care if your work affects the [other people working on the system], you're going to crash." 37:30

"What happens when you define your constraints incorrectly?... Vulnerabilities. ...At best, you get the wrong answer. Most commonly, you get undefined behavior which in the presence of hacking becomes redefinable behavior." 41:35

"It's important to realize that we are loosening the assumption that the developer knows what the system is supposed to do...Everyone who touches the computer is a little bit ignorant." 45:20

On Heuristics:

"When you say the same thing, but you say it in a different time, sometimes you're not saying the same thing." 9:10

"Hackers are actually pretty well-behaved. When hackers crash code...it does really controlled things...changing smaller things from the computer's perspective that are bigger things from a human's perspective." 20:25

"Bugs aren't random because their sources aren't random." 35:25

"Hackers aren't modeling code...hackers are modeling the developers and thinking, 'What did [they] screw up?' [I would ask a team to] tell me how you think your system works...I would listen to what they didn't talk about. That was always where my first bugs came from." 35:45

On Bug Advocacy:

"In twenty years...I have never seen stupid moralization fix anything...We're engineers. Sometimes things are going to fail." 10:30

"We have patched everything in case there's a security boundary. That doesn't actually mean there's a security boundary." 28:10

"Build your boundaries to what the actual security model is...Security that doesn't care about the rest of IT, is security that grows increasingly irrelevant." 33:20

"We're not, as hackers, able to break things. We're able to redefine them so they can't be broken in the first place." 59:25

On Automation:

"The theorem provers didn't fail when they showed no leakage of information between contexts because the right bits went to the right places They just weren't being asked to prove these particular elements." 18:25

"All of our tools are incomplete. All of our tools are blind" 46:20

"Having kind of a fakey root environment seems weird, but it's kind of what we're doing with VMs, it's what we're doing with containers." 53:20

On Testing in the SDLC:

"We do have cultural elements that block the integration of forward and reverse [engineering], and the primary thing we seem to do wrong is that we have aggressively separated development and testing, and it's biting us." 38:20

"[Re Penetration Testing]: Testing is the important part of that phrase. We are a specific branch of testers that gets on cooler stages...Testing shouldn't be split off, but it kinda has been." 38:50

Ctd. "Testing shouldn't be split off, but it kinda has to have been because people, when they write code, tend to see that code for what it's supposed to be. And as a tester, you're trying to see it for what it really is. These are two different things." 39:05

"[D]evelopers, who already have a problem psychologically of only seeing what their code is supposed do, are also isolated from all the software that would tell them [otherwise]. Anything that's too testy goes to the test people." 39:30

"[Re: PyAnnotate by @Dropbox] 'This is the thing you don't do. Only the developer is allowed to touch the code.' That is an unnecessary constraint." 43:25

"If I'm using an open source platform, why can't I see the source every time something crashes? ...show me the source code that's crashing...It's lovely." 47:20

"We should not be separating Development and Testing... Computers are capable of magic, and we're just trying to make them our magic..." 59:35

Misc

"Branch Prediction: because we didn't have the words Machine Learning yet. Prediction and learning, of course they're linked. Kind of obvious in retrospect." 27:55

"Usually when you give people who are just learning computing root access, the first thing they do is totally destroy their computer." 53:40 #DontHaveKids

"You can have a talent bar for users (N.B.: sliding scale of computer capability) or you can make it really easy to fix stuff." 55:10 #HelpDesk
"[Re: Ransomware] Why is it possible to have all our data deleted all at once? Who is this a feature for?!... We have too many people able to break stuff." 58:25

Sunday, June 10, 2018

Postman Masterclass Pt. 2

During my second Postman meetup as part of the Las Vegas Test Automation group, we were able to cover some of the more advanced features of Postman. It's a valuable tool for testing RESTful services (stronger opinions on that also exist), and they are piling on features so fast that it is hard to keep track. If you're a business trying to add automation, Postman is easily the lowest barrier to entry. And with a few tweaks (or another year of updates), it could probably handle most of your API testing.

The meetup covered the Documentation, Mock Server, and Monitor functionality. These are pieces that can fit into your dev organization to smooth adoption, remove roadblocks, and add automation with very little overhead. In particular, the Mock Servers they offer can break the dependency on third-party integrations quite handily, which keeps Agile sprints moving in the face of outside roadblocks. The Monitors seem like a half-measure: they give you a GUI for setting up external monitors of your APIs, but you still need Jenkins and their Newman Node package to do the same within your own environment. The big caveat with each of these is that they are most powerful when bought in conjunction with the Postman Enterprise license.  Still, at $20 a head, it's far and away the least expensive offering on the market.

Since the meetup, I've found a few workarounds for features I wish it had that aren't immediately accessible from the GUI.  As we know from testing in general, there is no one-size-fits-all solution.  The new features are nice, but they don't cover some of the basics I rely on to make my job easier.  Here is my ever-expanding list of add-ons and hidden things you might not know about.  Feel free to comment or message me with more:

Postman has data generation in requests through Dynamic Variables, but they're severely limited in functionality. Luckily, someone dockerized npm faker into a RESTful service. This is super easy to slipstream into your Postman Collections to create rich and real-enough test data. Just stand it up, query it, save the results to global variables, and reuse them in your tests.
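
For instance, a pre-request script along these lines grabs a record from the faker container and parks it in globals; the endpoint and response fields are placeholders for however you stood up the service.

// Pre-request script: fetch fake data and save it for later requests.
pm.sendRequest('http://localhost:3030/api/v1/users', function (err, res) {
    if (err) {
        console.log('faker service unreachable: ' + err);
        return;
    }
    var user = res.json();
    pm.globals.set('fakeName', user.name);
    pm.globals.set('fakeEmail', user.email);
});

Request bodies and URLs can then reference {{fakeName}} and {{fakeEmail}} like any other variable.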

The integrated JavaScript libraries in the Postman Sandbox are worth a fresh look. The bulk of my work uses lodash, crypto libraries, and tools for validating and parsing JSON. These turn your simple requests into data-validation and schema-tracking wonders.

  • Have a Swagger definition you don't trust? Throw it in the tv4 schema validator (see the sketch after this list). 
  • Have a deep tree of objects you need to be able to navigate RESTfully? Slice and dice with lodash, pick objects at random, and throw it up into a monitor. Running it every ten minutes should get you into the nooks and crannies.
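
Here's a rough sketch of the tv4 approach in a request's Tests script.  The schema below is a stand-in; in practice you'd paste in or convert the definition you don't trust.

// Tests tab: validate the response body against a (simplified) JSON schema.
var tv4 = require('tv4');

var schema = {
    type: 'object',
    required: ['id', 'status'],
    properties: {
        id: { type: 'string' },
        status: { type: 'string' }
    }
};

pm.test('response matches schema', function () {
    var body = pm.response.json();
    pm.expect(tv4.validate(body, schema), JSON.stringify(tv4.error)).to.be.true;
});
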
This article on bringing in the Big List of Naughty Strings (https://ambertests.com/2018/05/29/testing-with-naughty-strings-in-postman/amp/) is another fantastic way to fold interesting data into otherwise static tests. The key is to ensure you investigate failures. To get the most value, you need good logs, and you need to pay attention to the results in your Monitors.

If you have even moderate coding skills among your testers, they can work magic on a Postman budget. If you were used to adding your own libraries in the Chrome App, beware: the move to a packaged app means you no longer have the flexibility to add that needed library on your own (faker, please?).

More to come as I hear of them.

Monday, June 4, 2018

27th Quantity Surveyors

The 27th Surveyors are part of the ground units on the death world Pholos IV.  They remove invasive vegetation from settlements, patrol for xenos incursion, and quell uprisings among the rowdy recolonists.

The deep green robes with shocking yellow lining mirror the vegetation that sprouts, grows tall, and flowers often in the same day. The temperate latitudes pulse with these colors when viewed from space.

The surveying contingent can rely on support from giant machines: Knights with claws and saws for felling overnight forests, and Onagers specially modified to traverse the decaying swamps thick with fungal rot, resurface roads cracked only weeks after being laid down, and pulp the lignin and experimental protein bioheresy into specialized oils and unguents. The ruin wrought by the mad Biologis would be turned to the might of the Mechanicum at last.

Saturday, June 2, 2018

Fixing Ford AC Head Controller Vacuum Problem

The AC on my land yacht (2009 Mercury Grand Marquis) has been on the fritz for a while. Last winter, it gradually stopped switching into max AC/recirculate (a necessity in Vegas), then got stuck on norm AC until it finally rested on Defrost/Floor. I was able to fix it with some basic troubleshooting, YouTube sleuthing, and two bucks in o-rings.

This shaky yet informative video by Ian Smith helped me diagnose it as a problem with vacuum only. The AC itself was fine. It blows cool air all day long. It just did so at the windshield. It couldn't be the blend-door actuator.

The same video showed me how to diagnose the vacuum problems. The black hose providing vacuum from the engine seemed fine: I was getting 20 inches of vacuum with the car turned on when I hooked up a bleed pump with a gauge (mine came from Harbor Freight, shown in the video). To test the actuators, all I had to do was hook a 'jumper' pipe from black to the other pipes. Each one seemed to hold air, and the actuators sprang to life once again. For the first time in a year, I had cold air blowing from the vents. The problem couldn't be in the lines. I pulled the controller head for a closer look.

The head itself is a bunch of electronics, a control panel, and one removable plate with four solenoids. The vacuum hoses come into this through a manifold, and the head controls trigger the solenoids to route vacuum from the black hose to the others. This triggers different actuators under the dash. Something was amiss in the manifold.

I returned to YouTube looking for rebuild instructions. I found this extremely helpful video from a Chicago mechanic. The solenoids contain an o-ring that dries out, wears out, and loses the ability to hold vacuum. I obtained o-rings close to the recommended size from Lowe's (#36, 5/16 OD, 3/16 ID, 1/16 thickness) as I was not willing to wait for Amazon. A little Oatey silicone lubricant made the tight squeeze work a little better. I found I had to seat the solenoid heads at least once before total reassembly; it was too difficult to do so at the end while fighting with the other small parts at the same time. 45 minutes later, I had full control of my AC restored.

I can't believe it was this simple to fix the controller. I think I was intimidated by the AC (having spent $1500 last year to have the dealer redo the whole system from seals to refrigerant). I didn't want to break anything. A few targeted troubleshooting steps helped assuage any fears of irreparable harm, and now I have a comfortable cabin once again.

Wednesday, May 2, 2018

The Best Thing About Hero Quest...

...Are the Miniatures!
These were painted in the early 90s with my brother using craft paint while at my grandparents' cabin near Navajo Lake. They currently reside in the closet awaiting the cousins to kick in the doors with my son.

Apologies to BardicBroadcasts https://youtu.be/Cx8sl2uC46A

Thursday, April 5, 2018

The Glowhawk: OFBC Gaiden

Update: Plume added, rebuilt using thread and pipe cleaners to keep it upright and separate strands, removed plasticard sticks and zip ties.  Also pulled the EL Wire which is being repurposed in my son's EL Hoodie.



We were only able to print a few OFBC 2.0 cases before DEFCON 26. The leftover parts would have sat in my toolbox for quite a while if not for a serendipitous mistake: I ordered the wrong color LEDs from Sparkfun. This, plus a little construction advice from a seamstress, helped me cobble together the glowing headgear that is The Glowhawk.

My courage and thinning hair prevent me from getting a mohawk while at DEFCON, but I've always wanted one. Instead, I started to create one with a networking theme. Pipe cleaners in the colors of Cat6 twisted pair served as a thick mane anyone could be proud of. This was wired onto a hat as a test. It looked OK, but it was kind of stubby to wear all on its own.

The LED driver for the OFBC is overdone. A single charge can last 10 hours on the original model. I wondered how much it could handle in terms of output, and a little breadboarding showed me I could wire several of the LED modules together as long as they were in parallel. Now how to use them?

The LEDs are these 3W green modules with attached heat sink. Direct eye contact is not recommended (hence the pains we took to use momentary buttons on the OFBC). On the beer light, we diffused the over-bright light so it could be sculpted by the drink it passed through. I was inspired by a fiber optic dress I saw elsewhere and found fiber optic table centrepieces for dirt cheap on Amazon. Some hot glue joined the disassembled fiber optics to the bright LED. The mane of glowing green was born!

With this fresh take in hand, DEFCON was upon us. I packed my things and thought I might take a crack at it in the evenings. The Richard Cheese show was the perfect venue to solder everything together. The_bozo and I found a better place to work where the hot glue gun could run safely. I transferred the existing Cat6 mohawk to a bright green John Cena hat from Goodwill. Inside the channel that ran between upturned pipe cleaners, I hot glued the modules and fiber optics. Zip ties kept the fiber bundles from flopping around too much.

I consider the Glowhawk a great success, if a tad impractical. It lasted about 2 hours on a charge, and I was able to walk around wearing it with the mobile party crew for that long before it got uncomfortable.  A photo of me wearing it hit the DEFCON Closing Ceremonies, and my son keeps trying to steal my remaining fiber optics for a lamp in his room.

Future improvements include better internal support, googly eyes to cover the logos, and a fifth plume to fill out the front. See you next year!





Tuesday, April 3, 2018

Urns

My father passed late last year, and I made three nondescript urns as keepsakes for family and friends. It was the first time I made a box of any respectability since 2000.  I hadn't originally planned to make them when he passed, but making them helped me process things in a difficult time.

I was the responsible party for my father's estate as his wife does not speak English very well. As such, it fell to me to arrange the funeral, notify friends, and start to organize his affairs. I kept it together. The arrangements were made, the bills were covered, and all in a few days. I kept it together, that is, until I tried to return to work. I got ready. I even got in my car to go. But I could not. Instead, I went into the shop and executed a simple design for holding a portion of his ashes.

The material is Indian Rosewood (the same that I used for the magnetic bottle openers). The strong grain made mitered corners a natural choice. I even had enough contiguous grain to try to book-end most sides. I didn't have a keyed or splined miter jig (which could have strengthened the corners), but I figured the lid and bottom would provide a good brace against failure.

Dimensioning the lumber wasn't very difficult; it was the geometry of the corners that caused me real trouble. I left the sides thick to give each box some heft. I eyeballed the lid thickness and shaved down some beautiful figured grain to just the right height (maybe I overshot it a little and had to clean it up later). When I got to cutting the miters, I found that I didn't have any accurate way to match them up. The miter saw was definitely not accurate from cut to cut. I lost a lot of material on the table saw trying to get a canted blade to just the right angle. I finally settled on using my miter sled. I had to cut the sides down a bit to make sure I could make the entire cut in one pass. By the end of this therapeutic day, I had three roughly identical boxes ready for glue-up.

The second half took a few more months to pull off. Uncertainty about the accuracy of the cuts led me to put the project on hold. Should I delay and try to true them with a shooting board? My girlfriend gave me the most wonderful advice once: when you find yourself rushing a project, put it down and come back later. The parts for three urns marinated on the bench and in my mind for a few months.

A test fit in March didn't seem too bad. The time off convinced me to persevere and get them together. I discovered too late that I mixed up the orientation of the edges. My careful bookends were a jumble on two of the three boxes. However, the imperfect corners and dimensional problems worked to hide the errors amongst each other. Sanding trued up protruding tear-out and splinters without obvious rounded-off corners. Finally, dark stain and some paste wax finished the work of hiding imperfect joints in dark recesses and shiny polished surfaces.

I finished the bottom with plywood. If I had to pick a spot where I'm uncertain about my choices, it's here. Glue is strong, but how will the Baltic birch bottom hold up over time? I'm thinking of throwing in some brads there just in case. The bottom served as a canvas whereon I could memorialize my father. I was able to burn in the message "Invictus Maneo", the motto of Clan Armstrong, our ancestral family. Loosely translated, it means, "I remain unconquered."

This entire project was an object lesson in how I'm still learning some of the most basic techniques in woodworking. I need a way to clean up miters that start on the saw. A shooting board or similar has been recommended. Fine adjustments on my existing miter sled might also work. Though it didn't seem too bad once finished, the tearout for certain cuts makes me think I have a dull blade. I'll have to investigate, tune, and try again.

I think I've worked through a phobia of complex geometry. Something my father always talked about is how to hide your mistakes in woodworking. Bookends, miters, and a fitted lid left precious room for that, but I found a few tricks along the way such as meticulous test fitting, blue tape as clamps for difficult pieces, and patience above all. Regardless, I'm looking forward to the next boxes I build. I hope those have a markedly different emotional footprint.

OFBC 2.0

For Toxic BBQ 13 (DEFCON 25), we returned to the OFBC to see if we could improve the design and add some needed table decorations.


The first step was to simplify the PCB creation. I created a new layout in Fritzing that reduced whitespace. It also moved off-board components like batteries and the LED modules to use JST connectors for easy installation and swapping.  OSH Park did a great job with the PCBs. I was able to directly convert the Fritzing designs to printable format. Each board was less than 2 bucks by the time we finished. Never again will I make my own PCBs by hand. 


Sparkfun supplied most of the same components for about 15 bucks per light. Here is an updated BoM for this case:

Next, we redesigned the case. Instead of a three-piece design requiring glue to assemble, the two pieces would be a base and a lid with a logo. Everything could be screwed into designed posts and covered with the lid. It was a snap. Production was easier with Shapeways, but the longer lead times prevented us from delivering to the barbecue. The prototyping went well and matched the designs; the mass printings, however, were so delayed that they didn't arrive in time even with expedited shipping. The resin product looked much better than the filament-printed 1.0 model. The cost at 20 bucks or so each was not prohibitive, but it certainly wasn't mass-market ready.




Design Pics






Updated Lid Design for Toxic BBQ 2018

Friday, March 30, 2018

Hardback Review

Received my Hardback copy that I Kickstarted. Having played Paperback once or twice, the thought of an evolved version piqued my interest. Here's an initial take after an unboxing and one game: I loved this playthrough, and I'm looking forward to the next one!

Press your luck - In most deck builders, your hand is your turn. A bad draw leads to an unsatisfying nothing turn. Not so with Hardback. Use ink and remover to turn a bad draw into a high dollar buy. I like that drawing cards (a coveted power in Dominion and Star Realms) is a core mechanic rather than in the soup of card benefits.

The power of not buying - In most deck builders, you need to buy something by the end of your turn. Use it or lose it. Adopting this in Hardback can lead to filling your deck with trash letters (L. Ron Hubbard syndrome) and a perennial lack of money. Instead, go for quality by NOT buying cards and instead buying ink. This allows you to play ACROSS turns by saving up ink, drawing all your cents next turn, and dropping it on that sweet consonant or Perennial Classic with great benefits. Once we realized this, play went a lot faster as we could buy high cost cards. We just had to wait and plot and plan.

A Game About Words - Our first game was with 3 people: one experienced gamer, one apathetic adult, and one ADHD preteen. I was worried that, like Scrabble, the word part of the game would turn players off. Instead, it provided fun stories and interesting interactions. You can turn any card into a wild and lose its benefits, so using all your cards can help you, just not as much as pressing your luck as described above. I played two words that mean 'toilet' while my tongue-tied son bought "reveal adjacent wild" cards so he could focus on getting points.

Surprisingly and Pleasingly Interactive - There are no attack cards as a benefit in the standard game, but that doesn't mean it's not interactive. When in doubt, just give up: Ghost Writer lets you play open-hand and rely on other players. They even get ink as a benefit! For a genre rife with negative player interactions ruining games (no-attack Dominion is a staple at our house), this is a refreshing way to add positive interactions. The Perennial Classic mechanic is similarly combative but not adversarial, and the Jail benefit and the ability to reset the offer row can elicit a groan or two while you foil your opponents' plans.

OMG The Design - There are puns everywhere, the cards are complex but not cluttered, the cardstock is pleasing to hold, and the use of meeples rather than tokens makes it chunkier than the small box lets on. Leaving room in the box for more cards might hint at expansions, but it comes with plenty of alternative ways to play. I can't wait to break out the player powers and co-op mode.

I know Paperback was hard to get for a while. If you have a chance and need a deck builder with less combat and more pithy reveals, get Hardback.

Link to Reddit discussion: https://www.reddit.com/r/boardgames/comments/7tlini/hardback_first_impressions

Inquisitor Eisenhorn

Recently finished painting the Inquisitor Eisenhorn 30th Anniversary figure. As he was one of my father's favorite characters from Dan Abnett's 40k works, he will lead the reliquary squad to guard his urn in my display case!  Most of the techniques are standard, but I learned two things.

The first is that faces are really difficult without the right colors. I couldn't get the blending right with the washes and pots I had. The end result was muddy and pale. I touched it up after some research, and he looks better as a result. The hooded eyes ensure that the genetic anomaly called Private Dickard Syndrome doesn't affect Eisenhorn too. A little grey dry brushing on his chin gave him the 5 o'clock shadow and a little depth to match his hair.

The second bit of learning was around highlighting armor. Because he has so little, I didn't get sick of it and give up. The teal shoulder pads were a dream. They are a very simple highlight that allowed me to build up a rich color. The sharp white highlight was carefully applied, and it makes it look shiny without having to apply a lustrous enamel. I like it so much that the rest of the reliquary squad will have this color on their Tempestus breastplates.

Overall, I like one shot characters like this to learn new techniques. And this figure has enough detail to try many more. I particularly enjoyed the base with its cracked emblem and shiny brass.

Tuesday, March 20, 2018

Behat AfterScenario, PHP Garbage Collection, and Singletons

In Behat, I added a singleton to our contexts to store things across scenarios, but I ran into trouble when trying to keep separation between my tests.  The storage object allowed me to be creative with builders, validators, and similar ways of reducing repetition and making the PHP code behind easier to read.  There was a problem though: it would randomly be cleared in the middle of a test.

The only thing I knew was that the object would get cleared at roughly the same point each run.  I had a set of about 50 different tests in a single feature.  It would call an API multiple times, run validations on the responses, and then move on to the next test, putting information into the storage object all the while.  The tests would not just fail in the middle of a scenario; they would generally fail near the same part of a scenario every time.  It smelled like timing, an async process, or something clearing a logjam.

While designing the storage object, I had the bright idea to clear it with every scenario.  The singleton acts like a global variable, and a clear after each one would ensure data from one test didn't pop up in another.  To make sure I was running this at the last possible moment, I put the clear into the __destruct() method of my context class.  By putting the clear in the destructor, I gave PHP permission to handle it as it saw fit.  In reality, it sometimes left my scenario objects to linger while running the next (due to a memory leak or similar in Behat itself, or a problem in my code; I couldn't tell).

/**
 * Destructor
 */
public function __destruct()
{
    ApiContextStore::clear();
}


I first stopped clearing the store and the bugs went away.  Whew!  But how could I make sure I wasn't contaminating my tests with other data and sloppy design?  I tried two things:

1) gc_collect_cycles() forces the garbage collector to run.  This seems to have the same effect of stopping the crashes, but it was kind of a cryptic thing to do.  I had to put it in the constructor of the Context rather than somewhere that made more sense.

/**
 * FeatureContext constructor.
 */
public function __construct()
{
    // Force a garbage collection pass, then bootstrap the store.
    gc_collect_cycles();
    ApiContextStore::create(); // Creates an instance if needed
}
2) Putting in an @AfterScenario hook provided the same protection, but it ran, purposefully, after every scenario was complete.  I'm not freeing memory with my clear, so relying on garbage collection wasn't a priority.  I just needed it to run last.


/**
 * @AfterScenario
 *
 * Runs after every scenario.
 */
public function cleanUpStore()
{
    ApiContextStore::clear();
}

http://php.net/manual/en/function.gc-collect-cycles.php
http://docs.behat.org/en/v2.5/guides/3.hooks.html

Monday, March 5, 2018

Postman Master Class Pt. 1

I gave a talk on using Postman while testing. We covered the UI, creating a collection, working with environment variables, and chaining tests with JavaScript.

A big surprise from this talk was how few testers knew about Postman to begin with. When I first started testing websites, I wanted a more reliable way of submitting http requests. The ability to save requests got me out of notepad and command-line cURL. The move to microservices only made it more useful to me.

By far, the biggest discovery was how many testers had never explored its signature features. Environments and scripting make the instrumentation of integration testing almost effortless. Organizations that want automation but don't want to invest the time can turn simple tests into bare-bones system tests for very little further expense.
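
As a sketch of what that chaining looks like (the names here are illustrative), the Tests tab of one request can stash a value that later requests pick up through an environment variable:

// Tests tab of a login request: check the response, then save the token.
pm.test('login succeeded', function () {
    pm.response.to.have.status(200);
});

var body = pm.response.json();
pm.environment.set('authToken', body.token);

// Subsequent requests can then send a header such as:
//   Authorization: Bearer {{authToken}}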

I'm planning a Part 2 where I can talk about Newman, the command line collection runner. I also want to demonstrate the mocking and documentation features. If a company adopts their ecosystem, it has the potential to make a tester's life much easier.  Even if it's only a tester's tool, it can help them communicate better with developers and reach into the product with greater ease.

Slides

Monday, February 19, 2018

Kill orphaned processes with awk

I use Behat to create some ham-handed load scenarios fairly regularly.  PhpStorm can help me spin up several runs to get concurrency if I want a crushing load.  But PhpStorm on Mac seems to disconnect from these running tests if I lock my computer or leave them running too long.  Killing them is a pain: I have to run `ps`, find the pid, then kill it.

When the list gets too long, I like to use awk to comb through the list and kill anything that is found.  It's easy to search and parse out the tokens like so:


ps -ef | awk '/behat/ {system("kill -9 " $2)}'

ps -ef delivers a verbose list of processes.  This is piped to awk, where I can specify a command using awk's scripting language.  In this command, I first search for 'behat'.  Then I run a command that pulls the second token (the process ID) from each line of the result and inserts it into the `kill` command.

Monday, January 8, 2018

Fedora 27 Upgrade Issue - Solved by Removing yum-utils

Had a #meltdown upgrade to Fedora 27 stymied by a package conflict:

Error: Transaction check error:
  file /usr/bin/debuginfo-install conflicts between attempted installs of yum-utils-1.1.31-513.fc27.noarch and dnf-utils-2.1.5-1.fc27.noarch
...

I got around this by removing yum-utils, which then allowed me to move on with the upgrade.  I got the idea from similar threads online.  Hoping anyone who runs into this after me will find it useful.

Here's to hoping the jump from 25 to 27 will be as significant as the one from 23 to 25.