When it comes to radiation testing in the new space industry, some companies choose to skip it, and that's OK! Here I'll explain why not testing can be acceptable and outline some of the alternative approaches companies choose.
This is not a popular opinion in some sections of the industry, but I believe there are two good reasons to take the 'do nothing' approach:
Short-term reasons: The mission is a short-duration demonstration, done quickly on a budget. This reasoning is similar to why hosted payloads can be a great option for early-stage companies: prove the hardware works or prove the business case, then transition to a longer-term approach.
Long-term reasons: For longer-duration missions, there is a balance of cost, schedule, and risk that can make radiation testing unattractive. In this scenario, your business case has to be able to absorb a reasonable probability of early failure.
Even without any testing, there are other things you can do to improve radiation tolerance:
If you’re procuring third-party hardware or a whole satellite, select something with flight heritage even if it doesn’t have radiation test data.
Sensible selection of critical components - There is a ton of public test data for critical COTS parts. For example, no one should be selecting a microcontroller or FPGA without published test data because so much is available. The NSREC and RADECS conference proceedings are among the best sources.
Limiting the use of complex parts - There are some great voltage regulators that are incredibly complicated; where possible, it's best to avoid parts like these if they haven’t been tested. The more complex the IC, the more internal circuitry there is that can have issues.
Other standard mitigation techniques - Signal filtering, ECC, watchdogs, golden memory, latch-up detection, health checks, redundancy, de-rating of power devices.
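To make a couple of the software-side techniques above concrete, here is a minimal sketch of majority voting over redundant copies (the idea behind triple modular redundancy) and a golden-memory scrub that repairs a working copy from a known-good image. The function names and CRC-based integrity check are my own illustration, not taken from any flight codebase.

```python
import zlib

def vote(a, b, c):
    """Majority-vote three redundant copies of a value.

    If a single-event upset corrupts one copy, the other two outvote it.
    A double upset with no majority should escalate (e.g. safe mode).
    """
    if a == b or a == c:
        return a
    if b == c:
        return b
    raise RuntimeError("no majority: double upset, escalate to safe mode")

def scrub(working: bytearray, golden: bytes, golden_crc: int) -> bool:
    """Repair working memory from a known-good 'golden' copy.

    Returns True if the working copy was corrupted and had to be restored.
    """
    if zlib.crc32(golden) != golden_crc:
        raise RuntimeError("golden copy itself is corrupted")
    if zlib.crc32(working) != golden_crc:
        working[:] = golden  # restore from the golden image
        return True
    return False
```

In practice the golden copy would live in more robust storage (e.g. rad-tolerant NOR flash or EDAC-protected memory) and the scrub would run periodically, with the watchdog only being kicked when these health checks pass.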
In most cases, not testing is too risky. The figure below compares some of the alternative options that companies choose.