!define TEST_SYSTEM {fit:A} part II
In November 2008 I wrote the first part of this article, but I never really gave the background on why I originally asked Bob to add this feature.
So why does FitNesse need to be able to run different parts of suites in different VMs?
Imagine you are testing services registered on an ESB that allows deployment of services as EJBs. There’s something important about that fact: by default, different EJBs run with their own class loaders. You can dig into the EJB spec if you’d like, but there’s roughly a hierarchy of five class loaders by the time you get to an executing EJB (could be more, could be fewer).
Why is this important? If two EJBs reference the “same” JAR file, its classes could be loaded by the same class loader or by different ones. If the JAR file is installed high in the class-loader hierarchy, then both EJBs get classes loaded by the same class loader instance, and therefore they are the same classes.
However, let’s assume that each EJB represents one service, and furthermore that each EJB references some JAR file directly, that is, the JAR is not visible to a hierarchically higher class loader. Then that same JAR file will get loaded twice, once by each class loader (this really happens), and those exact same classes will be considered different.
That’s right: the classes in the exact same JAR file, loaded by two different class loaders, are treated as unrelated classes.
In reality, each deployed EJB is “complete” in that it includes the JARs it needs to execute. So the “same” JAR – same in terms of file contents – exists twice in the system and is loaded by two class loaders.
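You can reproduce this outside a container with two sibling class loaders. Here’s a minimal sketch – the JAR path and class name are hypothetical, made up for illustration:

import java.net.URL;
import java.net.URLClassLoader;

public class TwoLoadersDemo {
  public static void main(String[] args) throws Exception {
    // Hypothetical JAR containing the common data model.
    URL[] jar = { new URL("file:/path/to/commonModel.jar") };

    // Two sibling loaders; passing null as the parent keeps them from
    // delegating to the application class loader, so each defines the
    // class itself.
    URLClassLoader a = new URLClassLoader(jar, null);
    URLClassLoader b = new URLClassLoader(jar, null);

    Class<?> fromA = a.loadClass("com.example.model.Customer"); // hypothetical class
    Class<?> fromB = b.loadClass("com.example.model.Customer");

    System.out.println(fromA.getName().equals(fromB.getName())); // true: same name
    System.out.println(fromA == fromB);                          // false: different Class objects
    System.out.println(fromA.isAssignableFrom(fromB));           // false: unrelated types
  }
}

Casting an instance created via the first loader to the type loaded by the second throws a ClassCastException at runtime, which is exactly the EJB situation described above.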
Now it gets a little worse. Those two EJBs work off a “common” data model, common in the sense that the underlying business objects are shared across a set of applications. However, the common data model is a moving target (it has grown as needed). To manage complexity, not all services are forced to update to the latest version of the common data model at the same time (forcing that generally isn’t manageable).
So now we have two (really several) EJBs that refer to logically the same JAR but in different versions. Luckily, this is no problem: because each EJB has its own class loader, it is OK for two EJBs to refer to different versions of the same logical data model.
What does any of this have to do with FitNesse? When FitNesse executes a suite, by default all tests are executed in the same VM with one class loader. So if you have a suite of tests that exercises different services, each of which might be using a different version of the common data model, then the first service executed loads the common data model, which really isn’t exactly common. This causes tests to fail that would not fail if each test executed with its own class loader.
Rather than deal with writing custom class loaders, Bob instead made it possible to name a VM (see the title for the syntax). By doing this, he made it possible to create an environment similar to the one EJBs get by default.
FitNesse.Slim table table example and migration to a DSL
I’m working on some notes related to an upcoming presentation. You can see an example of using the Slim table table here: Table Table Example
It’s a work in progress. If you see something that doesn’t quite make sense, please let me know.
OO is Irrelevant!
This got me to thinking, because the SOLID principles are not about writing OO software. What follows is my original response, but it seemed to warrant a blog entry. The comment in question said:

“One can comment critically on the ‘Employee’ class example, though. Frankly, it IS bureaucratic. So either it’s a bad example and you have misrepresented your principles, or Joel is right.”
Since no one has the authority to define OO, I’ll even venture to say that “Employee” is not OO.
Whether something is OO or not OO is irrelevant. The original point of OO was an experiment in adding a new kind of scope to see what happens (go back to the mid-to-late ’60s). Then it was to make it easier for kids to program (I’m thinking of Alan Kay and Smalltalk in the very early ’70s). Eventually it was to make more maintainable code.
Ultimately, doing X to get Y often promotes X into the goal. We do OO to make maintainable code so we can deliver faster and respond to change. This eventually becomes “we do OO, and not doing OO is bad.”
The same can be said for agile, TDD, scrum, XP, pretty much anything…
There was another “worthy” comment from another poster:

“There is no executive summary that explains the SOLID principles well enough that you can comment on them critically.”
The SOLID principles are about a goal: understanding what makes code easier or harder to maintain. They come from several people. Bob wrote some of them, he collected others, and he has even more (the component coupling and cohesion principles, e.g., the Reuse/Release Equivalence Principle). He experienced pain developing large systems in many languages, including C and C++. These principles are meant to make our lives easier, not to develop OO software. They do suggest using tools available in OO to enable building better software. Take the OCP and polymorphism: not just method overloading, but selecting a method from a message based on the runtime type of the receiver, as class-based OO languages do. (The manifestation of polymorphism in prototype-based languages like JavaScript and Self is arguably a different mechanism to accomplish the same effect.)
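To make that concrete, here’s a minimal sketch of the mechanism I mean – the names are hypothetical, not from the original discussion. The calling code is closed to modification; new behavior arrives as a new class, and the method executed is selected by the runtime type of the receiver:

interface PayClassification {
  double calculatePay(int hoursWorked);
}

class HourlyClassification implements PayClassification {
  public double calculatePay(int hoursWorked) {
    return hoursWorked * 25.0; // hypothetical rate
  }
}

class SalariedClassification implements PayClassification {
  public double calculatePay(int hoursWorked) {
    return 2000.0; // flat amount per pay period
  }
}

class Paymaster {
  // Closed to modification: adding a new classification never changes
  // this method; the receiver's runtime type selects the behavior.
  double payFor(PayClassification classification, int hoursWorked) {
    return classification.calculatePay(hoursWorked);
  }
}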
If code is hard to work with, the SOLID principles are one way to identify problems. Using Martin Fowler’s code smells is another. You can go back to good old coupling and cohesion, code analysis tools, etc.
In the end, suggesting that OO is a goal is equivalent to creating UML diagrams because “the process” says so. It is a form of cargo culting. It just furthers the point that critiquing based on an executive summary is a problem.
One might argue that because something is hard to understand, it is therefore of little use. I have two responses to this:
- I don’t think they are hard. They do take effort and lots of experience to really internalize, but most things worth learning are like that.
- An effective practitioner of any complex profession eventually has internalized thousands and thousands of rules, guidelines and principles such that s/he just “sees” (or in Fowler’s case “smells”) problems not seen by the novice. The SOLID principles are a mix of beginning, intermediate and ancillary rules that fit into the toolbox of effective practitioners.
Let the flames continue!
Using Slim in FitNesse with .NET
If you’d like a quick introduction to using Slim with .NET, Mike Stockdale is working on an implementation, and here’s a quick writeup of what you need to do to get it working: Using Slim.Net in FitNesse
!define TEST_SYSTEM {fit:A}
Uncle Bob has been busy with FitNesse lately. If you have been following him on Twitter or if you read his blog post on the subject, then you are aware of his work on Slim.
This post, however, is not about that. It is about something he did to make it possible to execute different tests in different VMs.
By default, when you click on the test or suite buttons to run tests, FitNesse finds the tests it will run and executes them in a single VM.
If you want to select a particular test runner in FitNesse, you can add one of the following definitions to a page:

!define TEST_SYSTEM {fit}

!define TEST_SYSTEM {slim}

If you do not define this variable, then fit is the test system used to execute tests, making the first example redundant…almost.
These variable definitions are inherited. FitNesse will search up the page hierarchy to find variable definitions. If you do not define TEST_SYSTEM anywhere in a page’s hierarchy, then that test will be executed with fit. However, if any of the pages above the current page changed the runner to slim, then Slim will be the test runner.
The other thing that you can do is add a “logical VM name” to the end of the runner. Here are two examples.

On some page:

!define TEST_SYSTEM {fit:vm1}

On a different page:

!define TEST_SYSTEM {fit:vm2}
All tests under the page containing the first define run in a VM with the logical name vm1; all tests under the page containing the second run in a VM named vm2.
By default (i.e., you have not defined TEST_SYSTEM anywhere), all tests are run in the same VM. More precisely:
- When you click the test button, all tests executed as a result of that button click run in one VM.
- When you click the suite button, all tests executed as a result are executed in the same VM.
As soon as you introduce the TEST_SYSTEM variable, the tests might execute in the same VM or in different VMs.
Conceptually, there’s a default or unnamed VM under which all tests execute. As soon as a page contains a TEST_SYSTEM with the added :VMName syntax, that page and all pages hierarchically below it run in a different VM.
If for some reason you want two unrelated page hierarchies to execute in the same VM, you can: define the TEST_SYSTEM variable with the same logical VM name in both.
Why did he add this feature?
I asked him to. He was working on that part of FitNesse, so I figured he’d be able to add the feature for a project I’m working on. It has to do with service-level testing of a SOA-based solution. If you’re interested in hearing about that and the rationale for adding this feature to FitNesse, let me know in the comments and I’ll describe the background.
I'm glad that static typing is there to help...
The Background
A colleague was using FitNesse to create a general fixture for setting values in various objects rendered from a DTD. Of course you could write one fixture per top-level object, but given the number of eventual end-points, that would require a bit too much manual coding. This sounds like a candidate for reflection, correct? Yep, but rather than do that manually, using the Jakarta Commons BeanUtils makes sense – it’s a pretty handy library to be familiar with if you’re ever doing reflective programming with attributes.
package com.objectmentor.arraycopyexample;

import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class ArrayPropertySetterTest {
  @Test
  public void assertCanAssignToArrayFieldFromArrayOfObject() {
    Object[] arrayOfBars = createArrayOfBars();
    Foo foo = new Foo();
    ArrayPropertySetter.assignToArrayFieldFromObjectArray(foo, "bars", arrayOfBars);
    assertEquals(3, foo.getBars().length);
  }

  private Object[] createArrayOfBars() {
    Object[] objectArray = new Object[3];
    for (int i = 0; i < objectArray.length; ++i)
      objectArray[i] = new Bar();
    return objectArray;
  }
}

For completeness, you’ll need to see the Foo and Bar classes:
Bar
package com.objectmentor.arraycopyexample;

public class Bar {
}
Foo
package com.objectmentor.arraycopyexample;

public class Foo {
  Bar[] bars;

  public Bar[] getBars() {
    return bars;
  }

  public void setBars(Bar[] bars) {
    this.bars = bars;
  }
}
So an instance of a Foo holds on to an array of Bar objects; and the Foo class has the standard java-bean-esque setters and getters.
With this description of how to set an array field on a Java bean, let’s get this to actually work.
First question: how do you deal with arrays in Java? Sounds trivial, right? If you don’t mind a little pain, it’s not that bad… By dealing, I mean what happens when someone has given you an array created as follows:

Object[] arrayOfObject = new Object[3];

Note that this is very different from this:
Object[] arrayOfBars = new Bar[3];
The runtime type of these two results is different. One is an array of Object; the other is an array of Bar.
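You can see the difference by asking each array for its runtime class, a quick sketch:

Object[] arrayOfObject = new Object[3];
Object[] arrayOfBars = new Bar[3];
System.out.println(arrayOfObject.getClass().getSimpleName()); // Object[]
System.out.println(arrayOfBars.getClass().getSimpleName());   // Bar[]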
This will not work:

Bar[] arrayOfBar = (Bar[]) arrayOfObject;

This will generate a ClassCastException at runtime. You cannot simply take something allocated as an array of Object and cast it to an array of a specific type. No, you have to do something more like the following:
// java.lang.reflect.Array
Array.newInstance(typeYouWantAnArrayOf, sizeOfArray);
That’s not too bad, right? You can then either use another method on the Array class to set the values, or you can cast the result to an appropriate array.
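Here’s a minimal sketch of both options, reusing the Bar class from above:

// Requires: import java.lang.reflect.Array;
Object array = Array.newInstance(Bar.class, 3); // runtime type is Bar[]
Array.set(array, 0, new Bar());                 // option 1: set values reflectively
Bar[] bars = (Bar[]) array;                     // option 2: cast, then use it normally
bars[1] = new Bar();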
That’s enough information to write a generic method to copy from an array of Object to an array of a subtype of Object:

public static Object[] copyToArrayOfType(Class destinationType, Object[] fromArray) {
  Object[] result = (Object[]) Array.newInstance(destinationType, fromArray.length);
  for (int i = 0; i < fromArray.length; ++i)
    result[i] = fromArray[i];
  return result;
}

This is a bit unruly because the caller still needs to cast the result:
Object[] arrayOfObject = new Object[] { new Foo(), new Foo(), new Foo() };
Foo[] arrayOfFoo = (Foo[]) copyToArrayOfType(Foo.class, arrayOfObject);

We can get rid of this cast if we use generics:
public static <T> T[] copyToArrayOfType(Class<T> destinationType, Object[] fromArray) {
  T[] result = (T[]) Array.newInstance(destinationType, fromArray.length);
  for (int i = 0; i < fromArray.length; ++i)
    result[i] = (T) fromArray[i];
  return result;
}

This doesn’t quite work because of type erasure, so to get this to “compile cleanly – no warnings,” you’ll need to add the following line above the method:
@SuppressWarnings("unchecked")
That’s just me telling the compiler I really think I know what I’m doing.
With this change, you can now write the following:

Object[] arrayOfObject = new Object[] { new Foo(), new Foo(), new Foo() };
Foo[] arrayOfFoo = copyToArrayOfType(Foo.class, arrayOfObject);
The original problem was to take an Object[] and set it into a destination object’s attribute. Now that we can create an array with the correct type, what’s next?
There are several things still remaining:
- Given the name of the property, determine its underlying array type.
- Create the array (above)
- Assign the value to the underlying field
- Do some sufficient hand-waving to handle exceptions
Here are solutions for each of those things:
Determine underlying type
PropertyDescriptor pd = PropertyUtils.getPropertyDescriptor(destObject, fieldName);
Class<?> destType = pd.getPropertyType().getComponentType();
Create the array
Object[] destArray = copyToArrayOfType(destType, fromArray);
Assign the value
PropertyUtils.setSimpleProperty(destObject, fieldName, destArray);

Here’s all of that put together, simply capturing all of the checked exceptions (that’s a whole other can of worms):
public static void assignToArrayFieldFromObjectArray(Object destObject, String fieldName, Object[] fromArray) {
  try {
    PropertyDescriptor pd = PropertyUtils.getPropertyDescriptor(destObject, fieldName);
    Class<?> destType = pd.getPropertyType().getComponentType();
    Object[] destArray = copyToArrayOfType(destType, fromArray);
    PropertyUtils.setSimpleProperty(destObject, fieldName, destArray);
  } catch (Exception e) {
    throw new RuntimeException(e);
  }
}
That’s all it takes to copy an array and then set the value in the field of a destination object.
Simple, right?
Sometimes static (and strong) typing can get in the way. This is one of those cases. Luckily, you can write this one and use it all over. Maybe it’s a part of the BeanUtils that I was unable to track down (probably).
Markup's Where It's At
My first word processor ran in under 8K on a Commodore 64, and that included playing “Pomp and Circumstance.” It was adequate. It was markup based. When you wanted to view your work, you previewed the results. (When my friend John got a spell-checker add-on, I was really impressed!)
Next, I started using Apple Writer (I think that was the name). All of those dot commands in the left margin. I got to the point where I didn’t need to preview; I knew what my stuff was going to look like.
After that, I worked with WordStar on a CP/M emulator running on Apple IIe’s. I actually liked WordStar. I still remember the ^K madness.
But then there was WordPerfect. To this day it is my favorite Word Processor (that’s WordPerfect 4.2 for DOS). I taught classes on it while I was working on a CS degree. I had the 40 function keys memorized and even the F11 and F12 keys – when I had keyboards with F11 and F12.
Here’s the thing: WordPerfect was logical. There were three general categories of markup (codes), and each category, when added, went to a specific place on the line (Home back-arrow, Home-Home back-arrow, Home-Home-Home back-arrow) related to the scope of the thing. And you could see them with Reveal Codes, Alt-F3. Sure, it took a bit of learning, but it was expert friendly.
At the same time that Ultima IV came out, I decided to write a book for one of the classes I taught. The first week some friends of mine and I finished Ultima IV – we did not stop playing, but played in shifts. I finished the outline that week. The next week I wrote the first version of a book. I still have one physical copy of that book.
I did this work (Ultima IV and WordPerfect) on my Leading Edge, a system with two 5.25” floppy drives. The first version was about 170 pages; the final was around 350 pages. Printing took about 3 hours: 1 hour to merge the master document, update the TOC, and generate the index, and 2 hours to print on my HP DeskJet (the first one).
WordPerfect never crashed (4.2 was solid, 4.0 not so much), and I knew what the resulting print was going to look like before it printed. I grokked it. I had lots of formatting, graphics (mostly hand-drawn bitmaps), numbered items, tutorials, chapters, page numbers, you know, stuff.
In the mid ’80s I started using Microsoft Word. I didn’t like it then, and to this day I do not like it. Anybody who has ever tried working with numbered lists across versions understands what I’m talking about (this feature worked perfectly in WordPerfect). I can get it to work. And I know when to grab the hidden character at the end of a paragraph (or not) when copying – which leads to selecting paragraphs in a particular direction. I work to avoid using it as much as I can, which is getting easier all the time.
I’m not a big fan of complex formatting, simply because I have not found anything as clear as WordPerfect. The later versions didn’t do it for me.
I am a fan of using wikis, because part of the whole reason to use one is to keep away from messing around with formatting and get to writing.
Lately I’ve been learning CSS. It’s powerful. It’s also magic. Working with flow layouts reminds me of the grid-bag-layout of X fame (or was that Motif – certainly Java has one as well). I like what I can do with it.
I like that I can get the basic content looking decent and then when I’m in the mood, I can attempt to cast spells to make the stuff I’ve written look better. So far, I’m a rank amateur.
I probably should be spending more time re-learning JavaScript (which is a cool language – I love prototype-based languages – first one I used was Self). But I’ve been working on some tutorials for Ruby, one on practicing TDD and a few on BDD.
These are low-level. For example, I have not used the story tests of RSpec yet. That’s the next tutorial – I still have the second BDD tutorial to finish. I already have stories for the next tutorial, so writing story tests and working my way into a solution should be a blast.
If you’d like to have a look, check out: http://schuchert.wikispaces.com/ruby.tutorials
These are works in progress!
To bring the train track back into a complete oval – we want the train to run, after all – I came across something from Martin Fowler (http://martinfowler.com/bliki/DuplexBook.html) and it got me to thinking. In these tutorials, I have a lot of “sidebars” where I give a little more context. Those sidebars are scattered throughout.
In 2001 I took a writers workshop with Jerry Weinberg (http://geraldmweinberg.com/Site/Communication.html) and in that workshop we had one exercise involving taking paragraphs from other people on various subjects and attempting to weave a common theme throughout.
Martin’s idea of a duplex book and Weinberg’s writing exercise led me to refactor all of those tutorials such that the sidebars are included files. This leads to a summary of all of the sidebars so far: http://schuchert.wikispaces.com/ruby.sidebars
I think I’m going to take Martin’s message to heart and have two views into this material: the sidebars and the tutorials. I’m going to try that exercise I learned from Weinberg and see where it takes me.
How does this relate? As I was refactoring, I made sure to put a span around the titles so I could have consistent, externally defined formatting in my CSS. So it’s loosely related. I’ve been working with spans and included files a bunch this week.
So what do you think? Is the name of this blog appropriate?
Goodnight!
The Physical Realm Matters
In 1997 I took a class called SEM offered by Weinberg and company. It was a rather profound learning experience; I’m probably not aware of much of what I learned. However, there’s one thing I learned that has been coming up lately.
The training was somewhat like an un-conference. We got together and, as a group, formed the schedule. It was hell. Somehow I ended up at the flip chart taking notes. After I don’t know how long – certainly an hour, probably more, maybe much less – I wanted out. People were throwing out suggestions, and I kept asking the group if anybody would like to take over. I even held out the marker. I never got out of that role (at the course, that is).
Jerry took me aside afterwards and suggested that the next time I am in that situation, simply put the marker down and physically move.
Stupid, right? I should have just done that, right? Well, the best time to plant a tree is 100 years ago; right now is the next best time. I have not forgotten that lesson. In fact, I often do just that, and I often train people to do the same thing. It is absolutely amazing what moving can do. (It often increases the congruence between your words and your actions.)
What about pair programming? More often than not, workstations are not conducive to working in pairs. One person is “in front” and the other is over the shoulder, off to the side, whatever. Or, worse yet, the monitor is in the corner. The setup needs to allow equal access to the keyboard, mouse and monitor. If not, then you are not peers; there’s the driver and the other person. That’s just a fact. The physical location is of first-order importance.
Here’s another example. In the scrum meetings you’ve attended, are people talking to each other or is everybody reporting to the manager? More often than not, I observe a status meeting with everybody standing up.
This week I observed a profound transformation that I hope sticks (OK, I thought it was important; we’ll see). I’ve been working with a team that was really doing the stand-up, activity-oriented, daily status report – cargo-culting scrum meetings.
Last week we had a “come to your deity” meeting. The result, while painful, was useful. We went from having mostly activities in the backlog to a much better mix of deliverables and some activities (I’m not looking for perfection, just striving for it – perfection is a journey, not a destination). I came back this week (on Wednesday) and I was unimpressed with the wall of tasks – or rather the seeming lack of progress.
Wednesday we created a sprint burndown, calculated both by number of tasks and estimated ideal days (I prefer story points, but this team is using ideal time for a good reason).
Based on that, the team was simply going to be way too late. We published that chart, and the next day things were looking better. By ideal days we were still going to be late, but closer. Tasks were just about there.
Then today two things happened. During the scrum, better than half of the reporting was about what tasks had been completed and what tasks were on the table. It was simply amazing.
Another thing I noticed was that the developers seemed to be talking to each other. Not 100%, but rather than a nearly visible conduit between the manager or scrum master and the person reporting, it was more of a broadcast. The quasi-circle stayed a quasi-circle; it did not become a line between reporter and manager, followed by another line.
I mentioned it to a few people. I’m not sure the team noticed it. I was really impressed with the team.
Of course, I made a huge mistake: I did not tell them this right after the meeting, so I missed a prime reinforcement opportunity.
The difference was palpable.
Years ago I learned the “trick” of not sitting “on a side” in meetings. If I was in a meeting with colleagues and clients, I tried to make sure to mix it up. This is another example of the same idea.
Bottom line, you often have a profound tool at your disposal that you can use. That’s your current location. If you need to mix things up then change from standing to sitting, or one side of the room to another. If you are at the front of the room, move to the back of the room. If you’re cowering in a corner, consider moving up to the table.
It really makes a difference.
What do you think?
TDD is how I do it, not what I do
“Do not seek to follow in the footsteps of the men of old; seek what they sought.” ~Basho
That quote resonates with me. I happened across it a few days after co-teaching an “advanced TDD” course with Uncle Bob. One of the recurring themes during the week was that TDD is a “how,” not a “what.” It’s important to remember that TDD is not the goal; the results of successfully applying TDD are.
What are those results?
- You could end up writing less code to accomplish the same thing
- You might write better code that is less coupled and more malleable
- The code tends to be testable because, well, it IS tested
- The coverage of your tests will be such that making significant changes will not be too risky
- The number of defects should be quite low
- The tests serve as excellent examples of how to use the various classes in your solution
- Less “just in case” code gets written – code which typically doesn’t work in the cases it targeted anyway
Right now I do not know of a way to accomplish all of these results more effectively than practicing TDD. Even so, this does not elevate TDD from a “how” to a “what.” TDD remains a technique to accomplish things I value. It is not a self-justifying practice. If someone asks “why do we do it this way?”, saying something like “we practice TDD” or “well, you don’t understand TDD” is not a good answer.
We had an interesting result during that class. One group was practicing Bob’s three rules of TDD (paraphrased):
- Write no production code without failing tests
- Write only enough test code so that it fails (not compiling is failing)
- Write only enough production code to get your tests to pass.
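To make the rhythm concrete, here’s a minimal sketch of one pass through those rules – a hypothetical Stack example, not one from the class:

import static org.junit.Assert.assertTrue;

import org.junit.Test;

public class StackTest {
  // Rules 1 and 2: this test fails first by not even compiling,
  // because Stack does not exist yet.
  @Test
  public void newStackIsEmpty() {
    assertTrue(new Stack().isEmpty());
  }
}

class Stack {
  // Rule 3: only enough production code to pass -- the hard-coded
  // answer survives until another test forces a real implementation.
  public boolean isEmpty() {
    return true;
  }
}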
- S.O.L.I.D.
- F.I.R.S.T.
- Separation of Concerns (Square Law of Computation)
- Coupling/Cohesion
- Protected Variation
- ...
TDD is a means to an end, but it is the end we care about. What is that end? Software that has few defects and is easy to change. Tests give us that. Not testing generally does not give us that. And testing in a common “QA over the wall” configuration typically does not cut it.
Since I do not know how to produce those results as easily any other way, TDD becomes the de facto means of implementation for me. That doesn’t mean I should turn a blind eye to new ways of doing things. In lieu of any such information, however, I’ll pick TDD as a starting point. This is still a “how” and not a “what.”
It turns out that for me there are several tangible benefits I’ve personally experienced from practicing TDD:
- Increased confidence in the code I produce (even more than when I was simply test infected)
- Less worrying about one-off conditions and edge cases. I’ll get to them, and as I think about them, they become tests
- Fun
Fun?
Yes, I wrote fun. There are several aspects to this:
- I seem to produce demonstrable benefits sooner
- I actually do more analysis throughout
- I get to do more OO programming
Demonstrable Benefits Sooner
Since I focus on one test at a time, I frequently get back to running tests. I’m able to see results sooner. Sure, those results are sometimes disjoint and piecemeal, but over time they organically grow into something useful. I really enjoy teaching a class and moving from a trivial test to a suite of tests that together add up to what students can see is a tangible implementation of something complex.

More Analysis
Analysis means to break into constituent parts. When I practice TDD, I think about some end (say a user story or a scenario), then I think about a small part of that overall work and tackle it. In the act of getting to a test, I’m doing enough analysis to figure out at least some of the parts of what I’m trying to do. I’m breaking my goal into parts; that’s a pretty good demonstration of analysis.

More OO
I like polymorphism. I like lots of shallow but broad hierarchies. I prefer delegation to inheritance. But often the things I’m writing don’t need a lot of this – or so it might seem. When I try to create a good unit test, much of what I’m doing is trying to figure out how the effect I’m shooting for can be isolated to make the test fast, independent, reliable… To do so, I make heavy use of test doubles. Sometimes I hand-roll them, sometimes I use mocking libraries. I’ve even used AOP frameworks, but not nearly as extensively.
Doing all of this allows me to use polymorphism more often. And that’s fun.
Conclusion
Am I wasting time writing all of these tests? Is my enjoyment of my work an indication that I might be wasting the time of my product owner? Those are good questions. And these are things you might want to ask yourself.
Personally, I’m pretty sure I’m not wasting anyone’s time for several reasons:
- The product owner is keeping me focused on things that add value
- Short iterations keep me reined in
- I’m only doing as much as necessary to get the stories for an iteration implemented
- The tests I’m writing stay passing, run quickly and over time remain (become) maintainable
Even so, since TDD is a how and not a what, I still need to keep asking myself if the work I’m doing is moving us towards a working solution that will be maintainable during its lifetime.
I think it is. What about you?
It's all in how you approach it
I was painting a bedroom over the last week. Unfortunately, it was well populated with furniture, a wall-mounted TV that needed lowering, clutter, the usual stuff. Given the time I had available, I didn’t think I’d be able to finish the whole bedroom before having to travel again.
I decided to tackle the wall with the wall-mounted TV first, so I moved the furniture to make enough room, taped just that wall (but not the ceiling since I was planning on painting it) and then proceeded to apply two coats of primer and two coats of the real paint. I subsequently moved around to an alcove and another wall and the part of the ceiling I could reach without having to rent scaffolding.
I managed to get two walls done and everything moved back into place before I left for another business trip. My wife is happy because the bedroom looks better. I did no damage and made noticeable progress. I still have some Painting to do (the capital P is to indicate it will be a Pain). I eventually have to move the bed, rent scaffolding, and so on. That’s probably more in the future than I’d prefer, but I’ll do it when I know I have the time and space to do it.
Contrast this to when we bought the house back in March. I entered an empty house. I managed to get two bedrooms painted (ceilings included) and the “grand” room with 14’ vaulted ceilings. I only nearly killed myself once – don’t lean a ladder against a wall while its legs are on plastic – and it was much easier to move around. I had a clean slate.
Sometimes you’ve got to get done what you can get done to make some progress. When I was younger, my desire to finish the entire bedroom might have stopped me from making any progress. Sure, the bedroom is now half old paint and half new paint, but according to my wife it looks better – and she’s the product owner! I can probably do one more wall without having to do major lifting, and when I’m finally ready to rent the scaffolding, I won’t have as much to do. I can break down the bed, rent the scaffolding, and then in one day I might be able to finish the remainder of the work. (Well, probably 2 days, because I’ll end up wanting to apply 2 coats to the ceiling and I’ll need to wait 8 hours between coats.)
Painting is just like software development.