Showing posts with label Tips. Show all posts

Wednesday, May 20, 2020

Learning AWS - Reflections after a Year in the Cloud

In 2018, a new job for me meant a new tech stack: AWS. Regardless of how long you've been developing software, new infrastructure can make you feel like you're starting from scratch. Jumping from a company with a cold room full of mainframes to somewhere cloud native was a shock, but I've enjoyed learning this wide world of cloud^h^h^h^h^hsomeone else's computer. If you feel like a cloud n00b, this post collects tips and tricks for learning cloud development from zero.

As with everything, pace yourself when trying to understand AWS and how to use it. If you feel blocked, put down one service and try another. I have found my happy path is a mixture of study, practical labs, poking around company infrastructure, and handling support rotations. Each contributes, in the long run, to understanding the available services and building effective products upon them.

The Basics - AWS Vocabulary

The Cloud - Someone else’s computer. Keep this in mind when learning about AWS. It’s all just servers in a data center somewhere else. AWS may take care of a large or small portion of managing these computers for us, and they charge a large or small fee for the privilege.

Identity and Access Management, IAM - Amazon's method of controlling access and permissions to AWS resources. Users can have multiple IAM roles. EC2 instances use IAM roles. Policies rely on IAM roles to allow/deny access so you only make resources available to those that need them.

Regions - A set of AWS data centers that are geographically related but operationally separate. Resources, accounts and VPCs can occupy a specific region.

Availability Zones - Each Region has at least three AZs. Each AZ is a data center separated from the others within a specific Region. Each has independent power, cooling, and compute resources to enable you to add fault tolerance to your applications. If the internet connection or power to one AZ goes down, you should be able to launch resources in the remaining AZs to compensate for the outage.

Fully Managed Service - AWS services that are fully managed handle scaling, replication, fault tolerance, and latency without you needing to consider them. A big one is managed Elasticsearch clusters. All you need to do is specify a few parameters and AWS configures the rest (for the most part). Though you don't have to do nearly as much management, learning how to tune managed services is still up to you.

EC2, Elastic Compute Cloud - Virtual machines you can launch on a whim, using the OS you desire, configuring them as you please. This is the backbone of AWS's success. EC2 is the opposite of fully managed services: AWS gives you the box, and you do the rest.

Learning Resources

AWS has a host of resources available to help you learn what options are available. If you've never worked with a cloud provider before, I suggest taking some of their video training for Cloud Practitioner Essentials. Log in with an Amazon (not AWS) account at https://www.aws.training/. Some trainings include labs that walk you through how to start your own instances, marshal AWS resources, and build a thing for yourself in the cloud. Pick something that matches your skill and engagement level, or use their workshop syllabus to self-guide training.

One of the best ways to learn cloud infrastructure is by doing. AWS offers a massive number of services with a free tier. Small VMs, hours of Lambdas, and lots of S3 space can be used to learn a service without paying a dime to Amazon. YouTube tutorials about services are often built specifically to never breach free-tier levels of usage. Take advantage of this if getting your hands dirty helps you learn best. Various online learning companies have video training and integrated quizzes/tests. Some have labs that rely on the free tier of AWS so you can learn at basically no charge. If you're learning for work, talk to your manager about supporting a subscription if you have a specific avenue of study you want to go down.

If you're a book person, AWS sponsors official study guides for each certification they offer. These can go out of date fairly quickly, but even an old version will help you get your feet wet with a prominent service (DNS is DNS, and a Route 53 study guide will be largely as applicable next year as last). Check the public library for guides that will still be useful even if they aren't current. Find a Slack channel at work or speak with experienced engineers. Context from experience can break a logjam of misunderstanding faster than reading the AWS docs for the fifth time.

Certifications

The AWS certifications are not required to work with cloud resources, but they can be a big boost to your confidence. If certifications and tests are your preferred method of study, here are a few certification tracks that come recommended:

  • AWS Cloud Practitioner Essentials - Good overview of AWS resources, administration, security, and budgeting. Take this if you’ve never used cloud resources before and want to come up to speed fast. Available as a series of videos with a free online test for certification.

  • AWS Solutions Architect - This is another broad level of study that can be useful after studying Practitioner. It offers a good overview of current offerings at AWS. You might use some, others not so much. Sometimes it feels like a sales pitch for their managed services, but the curriculum is useful for determining what is possible during the initial phases of a project. The multi-tiered certifications offer a learning path that can scale to your experience and career trajectory.

  • AWS Certified Developer - A deep dive on developing with AWS. The practical labs and study areas cover some of the same problems you might have to solve every day in taking an idea from concept to supportable, sellable product. This set of certs is also multi-tiered, and it can scale with your experience if you feel like you need a fresh challenge.

  • AWS Certified SysOps Administrator - Another deep-dive learning path that can help understand how to configure, secure, and economize cloud resources. Covers management and tooling available to keep a cloud running smoothly and safely without breaking the bank. Also has multiple tiers of certification.

Yarn Pet Mod - Platform for One Pound Cakes



My roommate has been picking up knitting and expanding their crochet skills during the pandemic Stay at Home orders.  As a part of their stimulus, they bought a Yarn Pet from Nancy's Knit Knacks.  They had also acquired a yarn ball winder that claimed to be able to do one-pound skeins.  The curlicue tensions the yarn as it unwinds from the outside of the cake.  The platforms that came with it were thin circles affixed to a smooth metal spindle with stops and set screws (you can see the spindle and stops above).  The platform holds the cake above the base at the appropriate height for the curlicue.  Small cakes?  Set it high.  Big cake?  How low can you go!

When they actually tried to use the Yarn Pet with the largest cakes (Caron One Pound FTW!), the little platform circles that came with the pet allowed the cake to slump and sag.  The cake would also rub against the curlicue and made it hard to pull.  They were worried about the yarn slipping below the edge and tangling under the cake.

To fix this, I used a board as wide as I could get and made it a circle:
  1. Found a Home Depot pine board in my scrap bin that was 5 3/4" wide.  Solid wood is preferable to plywood, which can get splintery and snag the yarn.  Avoid knots if at all possible.
  2. Cut the length to match the width.
  3. Found the center by marking two lines from corner to corner.
  4. From the center, used a protractor to mark 22.5 degree increments to the edge.
  5. Drilled a hole at the center mark.  To fit the Yarn Pet spindle, I needed a 7/32" bit.
  6. Using a table saw with the miter gauge set to 45 degrees (or a miter box), cut the square into an octagon.
  7. Tested the new platform on the spindle.  My square was about a quarter inch too wide at the widest point, but it had plenty of play between a flat side and the curlicue.  I knew trimming it again would allow it to spin freely.
  8. Trimmed the octagon into a hexadecagon by setting the miter gauge to 22.5 degrees.  (Towards the end of the piece, the side touching your miter gauge will be incredibly small.  Keep a firm grip, and beware of kickback!)
  9. Sanded the tarnation out of every surface with 150 up to 220 grit.  You can see in the picture above that I rounded every edge and corner.  I chose not to finish the wood, but I can always go back and do this between knitting projects.
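The geometry behind steps 6 and 8 is easy to sanity-check before cutting. Here is a minimal sketch using my 5 3/4" board width as the example: for a regular octagon trimmed from a square of side s, each corner triangle has legs x where s - 2x = x√2.

```python
import math

def octagon_corner_cut(side):
    """Leg length x of the corner triangles that trim a square of
    width `side` into a regular octagon: solve side - 2x = x*sqrt(2)."""
    return side / (2 + math.sqrt(2))

side = 5.75                 # board width in inches (5 3/4")
x = octagon_corner_cut(side)
flat = side - 2 * x         # remaining original edge
hyp = x * math.sqrt(2)      # new 45-degree edge
# A regular octagon needs the two edge lengths to match.
assert abs(flat - hyp) < 1e-9
print(f"mark corner cuts {x:.3f} in from each corner")
```

Marking roughly 1.7 inches in from each corner gives the 45-degree cut lines; the 22.5-degree pass for the hexadecagon then halves the corners again.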

Things learned:
  • I thought the thickness of the platform might be an issue, but it turned out to be perfect for giant cakes. The added thickness prevents the platform from wiggling on the spindle.  You can plane down your board to match the included platform circles, but then I might be worried about their integrity.  As is, the yarn comes off cleanly with the center-line of the cake coming just above the curlicue.  So smooth...
  • When putting the largest cakes on the pet, use the rubber stoppers for spindle-wound skeins to keep the cake centered on the spindle.  This will prevent wobbling due to a loosening center as it is pulled from side to side.
  • If you have a circle of the appropriate width and thickness already, all you need to do is find the center and drill it.  Couldn't be simpler.

Thursday, April 23, 2020

Taming an Anycubic Kossel Pulley 3D Printer

Quick post to note how I got my Anycubic Kossel Pulley basically working.  It took me forever to find how to do some of this, and I know I will forget it if I do not write it down.

  • Use DaHai's configuration video for starters.
    • Upgrade the firmware to Marlin 1.1.9.  I ended up using 1.1.9.1 as of this writing.
    • Use DaHai's files and modify them to work with stock steppers.  Use the Arduino IDE to load the firmware after replacing Configuration.h and Configuration_adv.h (which I did not make changes to).  Here are the changes I made to his Configuration.h:
      • Line 624-626: Change these from his upgraded TMC2130_STANDALONE to stock A4988
      • Line 705: I got crazy loud stuttering when first descending to the bed during a print.  Lower this to get rid of that.
      • Line 868: Several people online and I have measured and gotten good prints with the Type 2 Probe Offset at -15.88
      • Line 938 to 940: These need to be true for stock steppers.  DaHai's steppers did not need to be inverted.
      • Line 1358-1364: Define your temperature presets. I have used PETG to great success with a preheat of 70C for the bed and 230C for the hotend.  This rises to print at 80C and 245C respectively during the print.
    • When following the leveling instructions, the video shows a "Set Delta Height" option that is absent in the version of the firmware I loaded.  This caused me no end of headaches later when the method of subtracting the bed distance from both the Z-Height and Probe Offset produced weird math and never worked properly.  Instead, I ran auto-calibration, saved the settings, then:
      • Noted my Z after going to Prepare -> Auto Home
      • Brought the nozzle to the bed using Prepare -> Move Axis -> Move Z until a business card wouldn't move when squished between the nozzle and the bed, then noted the height
      • Changed the Z height by subtracting that number from it (subtracting a negative reading thus adds to the height)
      • Saved and Auto Homed
      • Set my Probe Offset to -15.88 per recommendations online.
      • Checked it again and only touched the Z Height when it was off.  Repeat the Z height move if this is still not right.
  • With the printer calibrated, it was time to print.  I just used Cura because I could not get Slic3r or Pronterface to work easily.  Cura does not have the Kossel in it by default, but it can be easily added.  JDHarris on Thingiverse even shared the configuration file they made, which can be picked up by Cura after a restart.
  • I printed with PETG which has a high temp but no fumes.  I found hairspray for adhesion worked best thanks to several awesome tips by people connected with the PDX hacker community.  Thanks all!
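For reference, the Z-height correction in the leveling steps above boils down to one subtraction. A sketch with made-up readings (your noted values will differ):

```python
def corrected_delta_height(delta_height, z_at_bed):
    """Subtract the Z reading taken when the nozzle grips the business
    card from the configured delta height. Subtracting a negative
    reading increases the height, matching the note above."""
    return delta_height - z_at_bed

# Hypothetical: Auto Home reports Z = 300.0, and the nozzle meets the
# bed with the display reading 0.6 (nozzle stopped short of true zero).
new_height = corrected_delta_height(300.0, 0.6)    # -> 299.4
# If the display had read -0.4 instead, the height grows.
taller = corrected_delta_height(300.0, -0.4)       # -> 300.4
```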
After this, it just worked and keeps working.  It's magical what a little math and open source firmware will do.  That being said, it's my first printer.  It is bound to break in ways I can't even imagine now.  First order of business?  Print things that make the printer better, as is tradition.

Update: Not all is well in Whoville.  I've developed some Heat Creep with this PETG printing at 245C, and I haven't had the time to troubleshoot it.  Wish me luck!

Sunday, March 24, 2019

The Aviary: Huckleberry

The Aviary, Pg 404

One of the cocktails hailing from The Office, a speakeasy basement bar underneath The Aviary, this seemed simple to assemble with only one bit of complicated machinery: a sous vide.  Also, the presentation alone was intoxicating: a frothy head atop a mauve concoction? Sign me up!


I was able to obtain a chinois at a Goodwill.  The strainer and pestle separate juice from pulp and seeds.  However, the main ingredient is a clove tincture (fancy word for Everclear infused with clove).  This required a sous vide as written.  For as long as I've heard about them, I had never pulled the trigger on this low-temperature wonder-machine (I don't have an Instant Pot either).  I figured it was time to lay that to rest.

There are plenty of DIY sous vide videos on the internet.  I settled on one that recommended a rice cooker combined with an industrial 110V AC temperature controller instead of a brewer's setup.  The most important part of this setup is the type of heated pot you use.  I couldn't use my crock pot, for example, because it had a digital control.  Every time the power cut off and then back on, it would not return to heating the pot.  My manual-switch rice cooker worked like a charm, however.  Then, for $20 in parts from the hardware store and $20 for the temperature controller on Amazon, I had a safe contraption through which to control my rice cooker and keep a pot of water within 2 degrees of a specific temperature for any length of time (perhaps "safe" is relative; use wire nuts and an electrical box when playing with mains, kids; the picture below shows iteration one with no cover).
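Those industrial temperature controllers are essentially hysteresis (bang-bang) relays: full power below the setpoint minus a band, off above it, hold state in between. A toy simulation of the idea (all constants are made up, not measurements of my rice cooker):

```python
def relay_decision(temp, target, band, heater_on):
    """Hysteresis control: turn on below target - band, off above
    target + band, otherwise keep the heater in its current state."""
    if temp <= target - band:
        return True
    if temp >= target + band:
        return False
    return heater_on

# Crude thermal model: +0.5 C per tick heating, -0.2 C per tick cooling.
temp, heater_on = 20.0, True
history = []
for _ in range(400):
    heater_on = relay_decision(temp, 60.0, 1.0, heater_on)
    temp += 0.5 if heater_on else -0.2
    history.append(temp)

settled = history[150:]   # ignore the warm-up ramp
# The relay holds the bath within a couple degrees of the target.
assert 58.0 <= min(settled) and max(settled) <= 62.0
```

The hysteresis band is what keeps the relay from chattering on and off every tick; the real controller just adds a thermocouple and a beefier switch.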


The clove tincture was dead simple but extremely smelly.  $1 in bulk cloves and some Everclear got me a half dropper full of the cloviest drops that ever passed your nose.  A word of warning: toasting the cloves is a horrendously smoky business.  Do this with a hood on full blast or outside.  We had to open all the windows and run for coffee.  I already had a vacuum sealer, so I dumped the toasted cloves into a bag, poured on the alcohol, and dunked it into the rice cooker for an hour.  I decanted the result into an amber bottle with dropper and savored the aroma (which wasn't hard; it was everywhere).


The rest of the recipe was fairly simple.  Huckleberries don't come into season until August, so we went with blackberries from Mexico.  The syrup came together easily with a few gradually finer strainings.  6oz made 166g of juice.  Amaro Averna from Total Wine, Bombay Gin on sale, and Angostura bitters I already had on hand completed the boozy bits.  A quick trip through a shaker came out with a pink foamy pour that gradually separated into mauve and foam.  The bitters and pepper hit our noses, and the herbal hit of the drink completes it.  It's just sweet enough with off-season blackberries to be pleasant without being overpowering.  As we drank, we noticed the colors change and aromas deepen.  Very fun and dynamic drink.



A second round (can't waste syrup, after all) made with vodka toned down the herbal nature.  This will probably be the version I make for myself unless the guests are already gin drinkers; the gin version runs too close to 'too much' pine.  A friend suggested ditching the clove and replacing it by painting the glass with Chartreuse.  Either way, this seems to be a reliable cocktail to just have on hand.  Freezing berry syrup in 2oz portions during berry season, plus the huge amount of clove tincture I have left over, means it will be quick to assemble with a fun story to tell while we shake it up.

Friday, July 27, 2018

Testing Encryption - 3 years of Dan Boneh's Online Cryptography Course

Three years ago in July, I completed Dan Boneh's online cryptography course with distinction through Coursera's Cryptography 1.  Since then, I've had the opportunity to use and test cryptographic systems at work and for hobbies.  Here are a few lessons learned when testing encryption.

I have found my fair share of bugs in the crypto we chose to use at work.  I've gotten into a routine when testing encryption used for message authentication:
  • Test the same plaintext multiple times.  Does it need to be different each time?  How much of the MAC is different each time?  It might help to explore the data your hashing function spits out as it can tell you how your hash function does what it does.
  • Replay it.  How can a user abuse identical MAC'd data if they replay it at a later date?  For a different user?  Can you add items to the plaintext that will allow you to validate not only the data but the source or timeframe as well?
  • Ensure your hashes are detecting changes. Is your MAC rejected if you change the data at various places within the message?
  • Rotate the key. Do you need a hash to survive a key change?  Usually you can just regenerate the data and re-MAC it, so figure out if you really need to use MACs over long lifetimes.  They're easy to compute.
  • Generate a bunch at once.  Is performance an issue with the service?  Most hashes are built for speed, but is yours?
For each of these failure modes, I'm looking mostly for hints of weakness.  I'm expecting pseudo-random noise, but how does my brain distinguish that from almost random noise?
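That routine can be sketched in a few lines against Python's hmac module, standing in for whichever MAC construction your system actually uses:

```python
import hashlib
import hmac

def mac(key, message):
    """HMAC-SHA256 tag as hex, a stand-in for the MAC under test."""
    return hmac.new(key, message, hashlib.sha256).hexdigest()

key = b"test-key"
msg = b"user=alice;amount=100"

# Same plaintext, same key: HMACs are deterministic, so tags match.
assert mac(key, msg) == mac(key, msg)

# Change detection: altering the data anywhere must change the tag.
assert mac(key, b"user=alice;amount=900") != mac(key, msg)

# Key rotation: a new key invalidates every previously issued tag.
assert mac(b"rotated-key", msg) != mac(key, msg)

# Verify with a constant-time compare to avoid timing side channels.
assert hmac.compare_digest(mac(key, msg), mac(key, msg))
```

Determinism is also the replay hazard from the list above: identical data produces an identical tag until you fold a timestamp or nonce into the message.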

There are many times when you need to generate a unique but random value but don't have the space to use a GUID.  To evaluate if a solution will be "unique enough", check out the Birthday problem wikipedia page, and this table of probabilities in particular.  Find out how many possible values exist (9 numeric digits = 10^9 ~= 2^30).  Compare on the table with that value as the hash space size versus the number of times you'll be setting this value.  This will tell you if the algorithm you want to use is sufficient.  If you are making long-term IDs that can only be created once, you obviously want the probability of collision to be extremely low.  If you can recover from a collision by creating a new transaction fairly readily, you might not need as much assurance.  I've used this to help drive a decision to increase unique token size from 13 to 40 characters, guide switching from SQL auto-numbers to random digits to hide transaction volumes, and ensure internal transaction IDs are unique enough to guide troubleshooting and reporting.
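If the Wikipedia table isn't handy, the same answer falls out of the standard approximation p ≈ 1 - e^(-n²/2N) for n values drawn from a space of size N. A quick sketch using the 9-digit example above:

```python
import math

def collision_probability(n_draws, space_size):
    """Birthday-bound approximation: p ~= 1 - exp(-n^2 / (2N))."""
    return 1.0 - math.exp(-(n_draws ** 2) / (2.0 * space_size))

space = 10 ** 9                                 # 9 numeric digits ~= 2^30
p_small = collision_probability(1_000, space)   # about a 0.05% chance
p_large = collision_probability(100_000, space) # collision near-certain
```

A thousand IDs in a 10^9 space is probably fine for recoverable transactions; a hundred thousand long-lived IDs in the same space is asking for trouble.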

Time and again, the past three years have taught me that cryptography must be easy for it to be used widely.  I've stayed with Signal for text messaging because it just works.  I can invite friends and not be embarrassed at its user interface.  It doesn't tick all the boxes (anonymity is an issue for a centralized solution), but it has enough features to be useful and few shortcomings.  This is the key to widespread adoption of encryption for securing communications.  Since Snowden revealed the extent of the NSA's data collection capability, sites everywhere have switched on HTTPS through Let's Encrypt.  Learning more about each implementation of SSH and TLS in the course was both informative and daunting.  I was anxious to get HTTPS enabled without rehosting the site on my own.  In early 2018, Blogger added the ability to do just that through Let's Encrypt.  It requires zero configuration once I toggle it on.  I can't sing its praises enough.  The content of this blog isn't exactly revolutionary, but this little move toward a private and authentic web helps us all.

Dan Boneh's Cryptography course continues to inform my testing.  The core lesson still applies: "Never roll your own cryptography."  The second lesson is how fragile these constructs are.  Randomness is only random enough given the time constraints.  Secure is only secure enough for this defined application.  Every proof in the course is only as good as our understanding of the math, and every implementation is vulnerable at the hardware, software, and user layers.  In spite of this, it continues to work because we test it and prove it hasn't broken yet.  I'm looking forward to another three years of picking it apart.

Tuesday, June 26, 2018

Interacting with OpenVMS on Mac through Terminal.app and iterm2

Terminal.app and iterm2 can be used on a Mac to interact with an OpenVMS system by setting the right profile settings and mapping keys with escape sequences. Most keys work by default (PF1-PF4 when Fn is used to disable Mac system interactions), but the terminal must be configured correctly to allow access to other keys commonly used in OpenVMS terminal applications (FIND, PREV, NEXT).

PF1-PF4

The F1 - F4 function keys will work as PF1-PF4 if Fn is pressed as well.
  • If your keyboard is a large Mac keyboard with a Fn key above the arrows, access PF1 - PF4 by turning off the Mac keyboard options (brightness, volume controls, etc.) by holding the Fn key and pressing F1 - F4. This also works for some other function keys. Smaller keyboards will need to map F13+. 
  • If you are on a PC keyboard, you can disable the Function Keys functions in System Preferences and return them to act as F1 - F4. 
  • If you don't want your function keys to always act as F1 - F4, the program FunctionFlip can be used to change your function keys back and forth on the fly. 

Accessing Keys with Shift and Alt

Some keys are mapped, but not accessible without using Shift and Alt in combination with the above Fn key/FunctionFlip.

Here are Terminal.app configs:
  • F11: Alt F6 
  • F12: Alt F7 
  • HELP: F15 on an extended keyboard or Shift F7 
  • DO: F16 on an extended keyboard or Shift F8 or Alt F11 
  • F17: Shift F9 or Alt F12 or F17 on an extended keyboard 
  • F18: Shift F10 or F18 on an extended keyboard 
  • F19: Alt F14 on an extended keyboard or F19 on an extended keyboard or map it (see below) 
  • F20: Shift F12 or Alt F15 
Some of the above work for iterm2. Here are alternate mappings:
  • F11 can be accessed with Control F11 
  • F20 will need to be mapped to a key of your choice using escape sequence [34~ 

Mapping Other Keys

Other keys can be mapped within Terminal.app or iterm2 by making a profile.
For Terminal.app:
  1. Open a terminal. 
  2. Go to the Terminal menu, Preferences. 
  3. Add a new profile with the + button at the bottom left. 
  4. Name it 'OpenVMS'. 
  5. On the Text tab, adjust the colors so you can differentiate it from your other terminal windows. 
  6. On the Window tab, adjust the Window Size to 132 Columns if your terminal apps support this width. 
  7. You may need to enable keypad mode to get access to LSE's navigation keys on the keypad (PF1+4 or 5 to seek to the bottom/top). 
On the Keyboard tab, you can add mappings to individual keys that OpenVMS needs for navigation. This is useful in LSE or other text editors. Choose a Key to map and then enter a mapping. Mappings are entered by typing a control character `Ctrl + [` (will appear as \033) followed by some additional keystrokes. The following mappings have been found and are based on this Google Groups thread:
OpenVMS Key   Key        Escape Sequence
FIND          Home       \033[1~
PREV          PgUp       \033[5~
NEXT          PgDown     \033[6~
SELECT        End        \033[4~
F19           ^ F9       \033[33~
F20           ^ F10      \033[34~
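The \033 in the table is the ESC byte (0x1b). Keeping the mappings in a tiny script makes it easy to see the exact bytes a profile needs to send (key names follow the table above):

```python
ESC = "\x1b"  # the byte written as \033 in the table

# OpenVMS key -> escape sequence, straight from the table above.
VMS_KEYS = {
    "FIND":   ESC + "[1~",
    "PREV":   ESC + "[5~",
    "NEXT":   ESC + "[6~",
    "SELECT": ESC + "[4~",
    "F19":    ESC + "[33~",
    "F20":    ESC + "[34~",
}

for name, seq in VMS_KEYS.items():
    # repr() exposes the raw escape byte instead of moving the cursor.
    print(f"{name:7}-> {seq!r}")
```

For a "Send Escape Sequence" style action you would enter only the part after the ESC byte, e.g. [1~ for FIND.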

For iterm2, use Profiles:

  • Use similar escape sequences for FIND and similar keys as above. On the Profiles, Keys tab: add a hotkey and select "Send Escape Sequence" for the action. Omit the \033 from the table above. FIND ends up as "Send [1~". 
  • Enable Keypad mode for navigating in LSE. Profile, Keys, keypad mode checkbox. This only works for extended keyboards.

Tuesday, June 12, 2018

Quotes from Dan Kaminsky's Keynote at DEF CON China


Above is Dan Kaminsky's keynote at the inaugural DEF CON China.  It was nominally about Spectre and Meltdown, and I thought it was immediately applicable to testing at all levels.  Here are some moments that jumped out at me:

On Context:

"There's a problem where we talk about hacking in terms of only software...What does hacking look like when it has nothing to do with software." 1:55

"But let's keep digging." Throughout, but especially 5:40

"Actual physics encourages 60 frames per second. I did not expect to find anything close to this when I started digging into the number 60...This might be correct, this might not be. And that is a part of hacking too." 6:10

"Stay intellectually honest as [you] go through these deep dives. Understand really you are operating from ignorance. That's actually your strong point. You don't know why the thing is doing what it is doing...Have some humility as you explore, but also explore." 7:40

"We really really do not like having microprocessor flaws...and so we make sure where the right bits come in, the right bits come out. Time has not been part of the equation...Security [re: Specter/Meltdown] has been made to depend on an undefined element. Context matters." 15:00

"Are two computers doing the same thing?...There is not a right answer to that. There is no one context. A huge amount of what we do in hacking...is we play contexts of one another." 17:50

[Re: Spectre and Meltdown] "These attackers changed time which in this context is not defined to exist...Fast and slow...means nothing to the chip but it means everything to the users, to the administrators, to the security models..." 21:00

"Look for things people think don't matter. Look for the flawed assumptions...between how people think the system works and how it actually does." 35:00

"People think bug finding is purely a technical task. It is not because you are playing with people's assumptions...Understand the source and you'll find the destination." 37:05

"Our hardest problems in Security require alignment between how we build systems, and how we verify them. And our best solutions in technology require understanding the past, how we got here." 59:50

On Faulty Assumptions:

"[Example of clocks running slow because power was not 60Hz] You could get cheap, and just use whatever is coming out of the wall, and assume it will never change. Just because you can doesn't mean you should...We'll just get it from the upstream." 4:15

"[Re: Spectre and Meltdown] We turned a stability boundary into a security boundary and hoped it would work. Spoiler alert: it did not work." 18:40

"We hope the design of our interesting architectures mean when we switch from one context to another, nothing is left over...[but] if you want two security domains, get two computers. You can do that. Computers are small now. [Extensive geeking out about tiny computers]" 23:10

"[RIM] made a really compelling argument that the iPhone was totally impossible, and their argument was incredibly compelling until the moment that Steve Jobs dropped an iPhone on the table..." 25:50

"If you don't care if your work affects the [other people working on the system], you're going to crash." 37:30

"What happens when you define your constraints incorrectly?... Vulnerabilities. ...At best, you get the wrong answer. Most commonly, you get undefined behavior which in the presence of hacking becomes redefinable behavior." 41:35

"It's important to realize that we are loosening the assumption that the developer knows what the system is supposed to do...Everyone who touches the computer is a little bit ignorant." 45:20

On Heuristics

"When you say the same thing, but you say it in a different time, sometimes you're not saying the same thing." 9:10

"Hackers are actually pretty well-behaved. When hackers crash code...it does really controlled things...changing smaller things from the computer's perspective that are bigger things from a human's perspective." 20:25

"Bugs aren't random because their sources aren't random." 35:25

"Hackers aren't modeling code...hackers are modeling the developers and thinking, 'What did [they] screw up?' [I would ask a team to] tell me how you think your system works...I would listen to what they didn't talk about. That was always where my first bugs came from." 35:45

On Bug Advocacy

"In twenty years...I have never seen stupid moralization fix anything...We're engineers. Sometimes things are going to fail." 10:30

"We have patched everything in case there's a security boundary. That doesn't actually mean there's a security boundary." 28:10

"Build your boundaries to what the actual security model is...Security that doesn't care about the rest of IT, is security that grows increasingly irrelevant." 33:20

"We're not, as hackers, able to break things. We're able to redefine them so they can't be broken in the first place." 59:25

On Automation

"The theorem provers didn't fail when they showed no leakage of information between contexts because the right bits went to the right places. They just weren't being asked to prove these particular elements." 18:25

"All of our tools are incomplete. All of our tools are blind." 46:20

"Having kind of a fakey root environment seems weird, but it's kind of what we're doing with VMs, it's what we're doing with containers." 53:20

On Testing in the SDLC

"We do have cultural elements that block the integration of forward and reverse [engineering], and the primary thing we seem to do wrong is that we have aggressively separated development and testing, and it's biting us." 38:20

"[Re Penetration Testing]: Testing is the important part of that phrase. We are a specific branch of testers that gets on cooler stages...Testing shouldn't be split off, but it kinda has been." 38:50

Ctd. "Testing shouldn't be split off, but it kinda has to have been because people, when they write code, tend to see that code for what it's supposed to be. And as a tester, you're trying to see it for what it really is. These are two different things." 39:05

"[D]evelopers, who already have a problem psychologically of only seeing what their code is supposed do, are also isolated from all the software that would tell them [otherwise]. Anything that's too testy goes to the test people." 39:30

"[Re: PyAnnotate by @Dropbox] 'This is the thing you don't do. Only the developer is allowed to touch the code.' That is an unnecessary constraint." 43:25

"If I'm using an open source platform, why can't I see the source every time something crashes? ...show me the source code that's crashing...It's lovely." 47:20

"We should not be separating Development and Testing... Computers are capable of magic, and we're just trying to make them our magic..." 59:35

Misc

"Branch Prediction: because we didn't have the words Machine Learning yet. Prediction and learning, of course they're linked. Kind of obvious in retrospect." 27:55

"Usually when you give people who are just learning computing root access, the first thing they do is totally destroy their computer." 53:40 #DontHaveKids

"You can have a talent bar for users (N.B.: sliding scale of computer capability) or you can make it really easy to fix stuff." 55:10 #HelpDesk

"[Re: Ransomware] Why is it possible to have all our data deleted all at once? Who is this a feature for?!... We have too many people able to break stuff." 58:25

Sunday, June 10, 2018

Postman Masterclass Pt. 2

At my second Postman meetup with the Las Vegas Test Automation group, we covered some of Postman's more advanced features. It's a valuable tool for testing RESTful services (though stronger opinions on that exist), and the team is piling on features so fast that it's hard to keep track. For a business trying to add automation, Postman is easily the lowest barrier to entry, and with a few tweaks (or another year of updates) it could probably cover most of your API testing needs.

The meetup covered the Documentation, Mock Server, and Monitor functionality. These are pieces that can fit into your dev organization to smooth adoption, remove roadblocks, and add automation with very little overhead. In particular, the Mock Servers can break the dependency on third-party integrations quite handily, which keeps Agile sprints moving in the face of outside roadblocks. The Monitors seem like a half-measure: they give you a GUI for setting up external monitors of your APIs, but you still need Jenkins and the Newman node package to run the same collections inside your own environment. The big caveat with each of these is that they are most powerful when bought in conjunction with the Postman Enterprise license. Still, at $20 a head, it's far and away the least expensive offering on the market.
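For reference, driving a collection with Newman from a CI job is only a few lines. This is a sketch, not an official recipe: the file paths are placeholders, and the small helper exists only to make the options object explicit.

```javascript
// Sketch: running a Postman collection from CI with the newman package.
// File paths are placeholders for your own exported collection/environment.
function buildNewmanOptions(collectionPath, environmentPath) {
  return {
    collection: collectionPath,   // exported .postman_collection.json
    environment: environmentPath, // exported .postman_environment.json
    reporters: 'cli',             // print results to the build log
  };
}

// In a Jenkins job you would then run something like:
// const newman = require('newman');
// newman.run(
//   buildNewmanOptions('./api.postman_collection.json',
//                      './staging.postman_environment.json'),
//   (err) => process.exit(err ? 1 : 0)
// );
```

A non-zero exit code on failure is what lets Jenkins mark the build red.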

Since the meetup, I've found a few workarounds for features I wish it had that aren't immediately accessible from the GUI. As we know from testing in general, there is no one-size-fits-all solution. The new features are nice, but they don't cover some of the basics I rely on to make my job easier. Here is my ever-expanding list of add-ons and hidden features you might not know about. Feel free to comment or message me with more:

Postman has data generation in requests through Dynamic Variables, but they're severely limited in functionality. Luckily, someone dockerized npm's faker into a RESTful service. It's super easy to slipstream into your Postman collections to create rich, real-enough test data: just stand it up, query it, save the results to global variables, and reuse them in your tests.
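As a sketch of that flow (the service URL, the response fields, and the stashFakeUser helper are my own placeholders; inside Postman, the pm.sendRequest/pm.globals calls shown in the comment do the actual work):

```javascript
// Sketch of a pre-request script that pulls fake data from a dockerized
// faker service and stashes it in globals for later requests.
// `globals` stands in for pm.globals; the response shape is an assumption.
function stashFakeUser(fakerResponse, globals) {
  globals.set('fakeName', fakerResponse.name);
  globals.set('fakeEmail', fakerResponse.email);
}

// Inside the Postman Sandbox the same idea looks roughly like:
// pm.sendRequest('http://localhost:3000/api/v1/user', (err, res) => {
//   const user = res.json();
//   pm.globals.set('fakeName', user.name);
//   pm.globals.set('fakeEmail', user.email);
// });
```

Once the values are in globals, any later request can reference {{fakeName}} and {{fakeEmail}} in its URL, headers, or body.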

The integrated JavaScript libraries in the Postman Sandbox are worth a fresh look. The bulk of my work uses lodash, crypto libraries, and tools for validating and parsing JSON. These turn your simple requests into data-validation and schema-tracking wonders.

  • Have a Swagger definition you don't trust? Throw it in the tv4 schema validator. 
  • Have a deep tree of objects you need to navigate RESTfully? Slice and dice with lodash, pick objects at random, and throw it all into a monitor. Running it every ten minutes should get you down into the nooks and crannies.
This article on testing with the Big List of Naughty Strings in Postman (https://ambertests.com/2018/05/29/testing-with-naughty-strings-in-postman/amp/) is another fantastic way to fold interesting data into otherwise static tests. The key is to investigate failures: to get the most value, you need good logs, and you need to pay attention to the results in your Monitors.
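The mechanic is simple enough to sketch: pick a random entry on each run and drop it into a variable the request body references. The short list below is a tiny stand-in for the real Big List of Naughty Strings, and the pickNaughty helper is my own illustration rather than anything from the article.

```javascript
// Sketch: fold a random "naughty" string into each run. The list is a
// tiny stand-in for the full Big List of Naughty Strings.
const naughtyStrings = ['', 'null', 'undefined', "' OR 1=1 --", '\u202Etest'];

// rng is injectable so the choice can be made deterministic in tests.
function pickNaughty(strings, rng = Math.random) {
  return strings[Math.floor(rng() * strings.length)];
}

// In a Postman pre-request script:
// pm.globals.set('naughty', pickNaughty(naughtyStrings));
// ...then reference {{naughty}} in the request body under test.
```

Because the payload changes on every run, a Monitor executing this collection slowly walks the whole list for you.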

If you have even moderate coding skills among your testers, they can work magic on a Postman budget. If you were used to adding your own libraries in the Chrome App, beware: the move to a packaged app means you no longer have the flexibility to add that needed library on your own (faker, please?).

More to come as I hear of them.

Saturday, June 2, 2018

Fixing Ford AC Head Controller Vacuum Problem

The AC on my land yacht (2009 Mercury Grand Marquis) has been on the fritz for a while. Last winter, it gradually stopped switching to max AC/recirculate (a necessity in Vegas), then got stuck on normal AC, and finally settled on defrost/floor. I was able to fix it with some basic troubleshooting, YouTube sleuthing, and two bucks in o-rings.

This shaky yet informative video by Ian Smith helped me diagnose it as a vacuum-only problem. The AC itself was fine: it blew cool air all day long, just at the windshield. So it couldn't be the blend-door actuator.

The same video showed me how to diagnose the vacuum problems. The black hose providing vacuum from the engine seemed fine: I was getting 20 inches of vacuum with the car running when I hooked up a bleed pump with a gauge (mine came from Harbor Freight, as shown in the video). To test the actuators, all I had to do was hook a 'jumper' hose from the black pipe to each of the others. Each one held air, and the actuators sprang to life. For the first time in a year, I had cold air blowing from the vents. The problem couldn't be in the lines, so I pulled the controller head for a closer look.

The head itself is a bunch of electronics, a control panel, and one removable plate with four solenoids. The vacuum hoses come into this through a manifold, and the head controls trigger the solenoids to route vacuum from the black hose to the others. This triggers different actuators under the dash. Something was amiss in the manifold.

I returned to YouTube looking for rebuild instructions and found an extremely helpful video from a Chicago mechanic. The solenoids contain an o-ring that dries out, wears out, and loses the ability to hold vacuum. I picked up o-rings close to the recommended size from Lowe's (#36: 5/16" OD, 3/16" ID, 1/16" thick), as I wasn't willing to wait for Amazon. A little Oatey silicone lubricant made the tight squeeze work a little better. I found I had to seat the solenoid heads at least once before final reassembly; it was too difficult to do at the end while fighting with the other small parts at the same time. 45 minutes later, I had full control of my AC restored.

I can't believe it was this simple to fix the controller. I think I was intimidated by the AC (having spent $1500 last year to have the dealer redo the whole system from seals to refrigerant). I didn't want to break anything. A few targeted troubleshooting steps helped assuage any fears of irreparable harm, and now I have a comfortable cabin once again.

Tuesday, March 20, 2018

Behat AfterScenario, PHP Garbage Collection, and Singletons

In Behat, I added a singleton to our contexts to store data across scenarios, but I ran into trouble keeping my tests separated. The storage object let me be creative with builders, validators, and similar ways of reducing repetition and making the PHP behind the scenes easier to read. There was one problem, though: it would randomly be cleared in the middle of a test.

The only thing I knew was that the object got cleared at roughly the same point each run. I had a set of about 50 tests in a single feature; they would call an API multiple times, run validations on the responses, and then move on to the next test, putting information into the storage object all the while. The tests would not just fail in the middle of a scenario; they would generally fail near the same part of a scenario every time. That pointed to timing: an async process, or something clearing a logjam.

While designing the storage object, I had the bright idea to clear it with every scenario. The singleton acts like a global variable, and a clear after each scenario would ensure data from one test didn't pop up in another. To make sure I ran the clear at the last possible moment, I put it in the __destruct() method of my context class. By putting it in the destructor, though, I gave PHP permission to handle it however it saw fit. In reality, it sometimes left my context objects lingering while the next scenario ran (due to a memory leak or similar in Behat itself, or a problem in my code; I couldn't tell).

/**
 * Destructor
 */
public function __destruct()
{
    ApiContextStore::clear();
}


I first stopped clearing the store, and the bugs went away. Whew! But how could I make sure I wasn't contaminating my tests with stale data and sloppy design? I tried two things:

1) gc_collect_cycles() forces the garbage collector to run. This seemed to have the same effect of stopping the crashes, but it was a cryptic thing to do, and I had to put it in the constructor of the context rather than somewhere that made more sense.

/**
 * FeatureContext constructor.
 */
public function __construct()
{
    // Bootstrap the store
    gc_collect_cycles();
    ApiContextStore::create(); // Creates an instance if needed
}
2) An @AfterScenario hook provided the same protection, but it ran, purposefully, after every scenario completed. My clear isn't about freeing memory, so relying on garbage collection was the wrong tool; I just needed the clear to run last.


/**
 * @AfterScenario
 *
 * Runs after every scenario.
 */
public function cleanUpStore()
{
    ApiContextStore::clear();
}

http://php.net/manual/en/function.gc-collect-cycles.php
http://docs.behat.org/en/v2.5/guides/3.hooks.html

Monday, January 8, 2018

Fedora 27 Upgrade Issue - Solved by Removing yum-utils

Had a #meltdown upgrade to Fedora 27 stymied by a package conflict:

Error: Transaction check error:
  file /usr/bin/debuginfo-install conflicts between attempted installs of yum-utils-1.1.31-513.fc27.noarch and dnf-utils-2.1.5-1.fc27.noarch
...

I got around this by removing yum-utils, which then allowed me to move on with the upgrade. I got the idea from similar threads online; hoping anyone who runs into this after me will find it useful.

Here's hoping the jump from 25 to 27 will be as significant as the one from 23 to 25.