• 0 Posts
  • 19 Comments
Joined 1 year ago
Cake day: July 24th, 2023




  • Backwards compatibility - yes I agree, it’s quite good at it.

    Hardware-specific issues for any OS - disagree. For Windows, 80-90% of that is handled by the hardware manufacturers’ drivers; whether issues get fixed is not down to any effort from Microsoft. For Linux it’s usually an effort of the maintainers, and if anything, Linux is famous for supporting old hardware that Windows no longer works with.

    But the point I was making is not that Linux or macOS is better than Windows or vice versa; it’s that Windows holds by far the largest desktop market share and neither of the alternatives is really a drop-in replacement. So in the end Microsoft is under no pressure to improve the UX, since switching OS is infeasible for the majority of its users at the moment.


  • Aside from the effort required that others have mentioned, there’s also an effect of capitalism.

    For a lot of their tech, they have a near-monopoly or at least a very large market share. Take Windows from Microsoft. What motivation would they have to fix bugs which impact even 5-10% of their userbase? Their only competition is Linux with its roughly 4(?)% market share and macOS, which requires expensive hardware. Not fixing the bug just makes people annoyed, but 90% won’t leave because they can’t. As long as it doesn’t impact enterprise contracts it’s not worth fixing, because the time spent doing that is a loss for shareholders, while new features that collect sellable data (like Copilot, for example) generate money.

    I’m sure even the devs in most places want to make better products and fight management for more time so they can deliver features at better quality - but it’s an exhausting, sharp uphill battle which never ends, and at the end of the day the person who shipped the broken feature with Data Collector 9000 built in will probably get the promotion, while the person who fixed 800 five-plus-year-old bugs gets a shout-out on a Zoom call.


  • I haven’t used Tailscale enough to know how well it works, but as a current ZeroTier user I’ve been considering moving away from it.

    I actually love the idea and it’s super simple to set up, but it has some very annoying pitfalls for me:

    1. It’s a lot of “magic”. When it fails to work, the ZeroTier software gives you very little information on why (a few diagnostic commands are sketched below).
    2. The NAT traversal can be iffy. I’ve had it fail on some public Wi-Fi networks and occasionally on mobile internet (same phone and network where it otherwise works). Restarting the app, reconnecting and so on often helps, but it’s not super reliable IMO.
    3. Just recently I had to uninstall the app, restart my Mac, and reinstall the app to get it working again - there were no changes that made it stop, it just decided it had had enough from one day to the next, and as in point 1, it doesn’t tell you much beyond whether it’s connected or not.

    Pretty much all of the issues I’ve had were with devices that have to disconnect from and reconnect to the network and/or devices that move between different networks (like a laptop or phone). On my router, it’s been super stable. Point is, your mileage may vary - it’s worth trying, but there are definitely issues.
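    For what it’s worth, the little visibility you do get is from the CLI rather than the GUI. A minimal sketch, assuming the standard zerotier-cli client is on the PATH (output format varies a bit by platform and version):

    zerotier-cli info           # node ID and whether the node thinks it is ONLINE
    zerotier-cli listnetworks   # per-network status and the addresses assigned to this node
    zerotier-cli peers          # known peers, latency, and whether the link is DIRECT or RELAY

    When connectivity silently dies, a peer showing RELAY instead of DIRECT is usually the first hint that NAT traversal has failed.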


  • Would you accept a certificate issued by AWS (Amazon)? Or GCP (Google)? Or Azure (Microsoft)? Do you visit websites behind Cloudflare with CF-issued certs? Because all 4 of those certificates are free. There is no identity validation for signing up with any of them really, beyond having access to some payment method (and I don’t think all of them even require that). And you could argue those 4 companies handle about 80-90% of the traffic on the internet these days.

    Paid vs free is not a reliable proxy for trust. If anything, a non-automated process where a random engineer just fetches the new cert and then hopefully remembers to delete it has a number of risk factors that don’t exist with LE (or other ACME-supporting providers).
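    The automation is the whole point. A minimal sketch with certbot, one common ACME client (the domain and webroot path here are placeholders):

    # one-time issuance, proving control of the domain over HTTP
    sudo certbot certonly --webroot -w /var/www/example -d example.com

    # renewal is meant to run unattended from cron or a systemd timer; --dry-run tests it safely
    sudo certbot renew --dry-run

    No engineer ever handles the certificate by hand in that flow, which avoids exactly the manual-process risks described above.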



  • I have no experience with this, but I happened to see an interview with Ludwig Minelli, the founder of Dignitas (an organisation for assisted death). The man is 90+ and still fighting for this right. I believe I originally saw it in video format, but I think this was the interview - it’s worth a read.

    I’d suggest you look up the contact details for the various organisations and reach out with your situation and questions to see what they say. They’re likely to be much better sources of information.


  • I don’t know if there are agencies focussing on this, but in general it probably comes down to the company more than the agency. It’s probably worth filtering for companies that offer flexible hours in the job description.

    I would say the IT job market is incredibly competitive for candidates at the moment, so it might be even more difficult to find truly flexible roles when companies can so easily find hundreds of people who just work regular hours.

    On your last question: I’ve been a hiring manager in 2 companies (although in the UK) for software engineers and adjacent roles (like devops, platform, QA), and I would not care whether someone needs equipment. In the big scheme of things, spending $800 on a monitor, keyboard and mouse is not even a drop in the bucket compared to the cost of an employee. What I would want to know is how you work in a team in your situation, and what arrangement we can make where you have a good experience but other people in the company can still count on you. E.g. if you are working on a project and an issue pops up that’s blocking others from progressing and we need you in the discussion, but you’re having a bad day and not working, what options can you offer? Or what if you get blocked when everyone else is asleep, so you can’t progress?

    I think being prepared and upfront about this at an early stage of interviewing would be ideal; it signals that you have thought about the people around you and also weeds out any companies that aren’t willing to make this arrangement work. That being said, as above, it’s a very competitive market right now, so chances are pretty slim (at least in the UK).

    Also keep in mind that once you look at companies that hire from abroad, you’re also competing with (comparatively) cheap labour from developing countries, where candidates will likely agree to much worse terms.

    Edit: one thing I forgot, you may have the option to be your own boss (depending on your skill level) and freelance on a project basis rather than on a per-day basis.



  • I wonder if this will also have a reverse tail end effect.

    Company uses AI (with devs) to produce a large amount of code -> code is in prod for a few years with incremental changes -> dev roles rotate or get further reduced over time -> company now needs to modernize and change a very large legacy codebase that nobody really understands well enough to even feed it into the AI -> now it’s hiring more devs than before to figure out how to manage a legacy codebase 5-10x the size of what the team could realistically handle.

    Writing greenfield code is relatively easy; maintaining it over years, keeping it up to date and well understood while bending it to every new requirement - now that’s hard.


  • I think I misunderstood your problem. I assumed the issue was the volume mounts, and after testing it I was indeed wrong - the Docker CLI now accepts relative paths, so your original command does the same as what I suggested. After re-reading your issue I have a different idea of what’s wrong, but I’d have to see your Dockerfile (or for you to confirm) to be sure.

    Do you add 10f.py to the Docker image when you build it, and do you specify the command/entrypoint in the Dockerfile? There are two possible issues I can think of with how you do that (although considering the docker compose version works, it’s probably the 2nd):

    1. You do add it, and you add it to /data in the image - mounting a volume over /data would make the script no longer exist in the container.
    2. You do add it and it’s not in /data - in this case the issue with running docker run -v ./:/data -w /workdir tenfigers_10f:v1 10f.py is the last bit: you override the command, which makes it look for the script at /data/10f.py. If you omit the last part (10f.py), it should run whatever the original command was, and assuming you set the CMD/ENTRYPOINT correctly in the Dockerfile, the script should see /data as ./ in Python (see the sketch below).

    (Also, when you run it with the CLI you might want to add -it --rm to the docker command, otherwise it won’t really behave like a regular command.)
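    For reference, here’s a minimal sketch of the case-2 setup (the image tag and script name are from your example; the base image and paths are just assumptions on my part):

    # Dockerfile - bake the script into the image, outside /data
    FROM python:3.12-slim
    COPY 10f.py /app/10f.py
    # make the mount point the working directory, so /data is the script's ./
    WORKDIR /data
    ENTRYPOINT ["python", "/app/10f.py"]

    docker build -t tenfigers_10f:v1 .
    docker run -it --rm -v "$(pwd)":/data tenfigers_10f:v1

    With that, whatever directory you run it from shows up as the script’s current directory, and you don’t pass 10f.py on the command line at all.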


  • It works in docker compose because compose resolves relative paths for the volumes; the docker CLI doesn’t.

    You can achieve this by doing something like

    docker run -v $(pwd):/data ...
    

    pwd is a command that returns the current path as an absolute path; you can run it by itself to see this. The $() syntax executes the inner command first, before the shell runs the rest of the line. (Same as backticks, just better practice.)

    I imagine that wouldn’t work on Windows, but it would on macOS, Linux or WSL.

    Generally speaking, if you need the file system access and your CLI requires some setup, I’d recommend either writing it in a statically compiled language (e.g. Go, Rust) or researching how to bundle a Python script into a standalone executable.
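    For the bundling route, one option I know of is PyInstaller - just a sketch, assuming the script is the 10f.py from your example (the single-file build ends up under ./dist/):

    pip install pyinstaller
    pyinstaller --onefile 10f.py

    The result is a self-contained executable you can drop on your PATH, so no volume mounts or images are involved at all.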

    If you’re just mounting your script in the container - you’re better off adding it directly at build time.




  • Personally, I’ve had an experienced manager and took great inspiration from him.

    A few traps I fell into:

    • it was a lot faster for me (i.e. an experienced senior dev with context knowledge) to finish a task than to assign it to someone less experienced who has to learn the context and takes 5x as long to do it, with lots of help from me still needed. The result was that I wasn’t building up my team in either experience or knowledge.
    • I assumed the deadlines I was given were set in stone and my job was to meet them. This made business-y people happy. It made everyone else (including me) miserable. I had to learn to say no and push back. It varies a lot between companies, but most of the time I found it to be a negotiation: either the deadline could move, or I had to argue for excluding things from the scope to make the deadline reasonable.
    • on the above, everything takes at least 3-5x as long as I think it will. If things finish early, that’s a great time to give my team some slack, add in additional QA work like extending tests, or repay some tech debt. Delivering something early earns us a pat on the back but no discernible benefit to the team.
    • every time someone said “you’ll have time to write tests/repay tech debt/upskill later once X is shipped” it never came true. Those things have to be built into delivery scopes, and it’s a constant battle - if you don’t do this, nobody else will.

    I’m sure there were other things too, but these are the ones I mainly recall. Talk to your team, ask for feedback. Every team, project and company is different - you’ll have to adapt.



  • Same as others: convenience. You can live entirely without it, but after some learning curve it’s not much to maintain.

    I’ve got open/close sensors on all doors and windows, so my heating turns off if something is left open for a few minutes.

    I’ve got a dark hallway with some motion sensors and smart bulbs, so the lights turn on when someone walks through. The lights are dimmed if it’s late at night, and they don’t turn on at all if it’s very late or the luminosity sensor considers the hallway already usable (e.g. on sunny days when there’s enough light bleeding in).

    I’ve got smart bulbs in most of the rooms we use a lot, which shift the color temperature from warm to cold and back to warm over the course of the day depending on the sun position/time (it’s a dark country; we often need lights even during the day, especially in winter).

    All in all, for me it was definitely worth the money and the time invested; I’d not want to go back to not having it, but I imagine for someone who hasn’t experienced it, it might seem superfluous or gimmicky.