Revert an optimization in optimize.py that sometimes breaks parallel optimization.
It looks like commit @cbc94b93 has broken parallel optimization for some calculators:
--- a/ase/optimize/optimize.py
+++ b/ase/optimize/optimize.py
@@ -181,21 +181,24 @@ class Optimizer(Dynamics):
         return (forces**2).sum(axis=1).max() < self.fmax**2

     def log(self, forces):
+        if self.logfile is None:
+            return
         fmax = sqrt((forces**2).sum(axis=1).max())
This unfortunately breaks parallel optimization with some calculators: on all cores except the master, the function returns here, and later the master calls atoms.get_potential_energy(). Some calculators will then return a cached value, but if the calculator does not have a cached value, a calculation is started on one core but not on the others, causing a synchronization problem and a lock-up. This affects Asap, which needs to communicate to decide whether the cached value can be reused.
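The failure mode can be sketched as follows. The names CollectiveCalculator, log_broken, and log_safe are hypothetical; the class only mimics a calculator whose get_potential_energy() is a collective operation. In a real MPI run the mismatched call would hang rather than return, so here a call counter stands in for the deadlock:

```python
class CollectiveCalculator:
    """Mimics a calculator whose get_potential_energy() is collective:
    every rank must call it, or the ranks deadlock."""

    def __init__(self):
        self.calls = 0

    def get_potential_energy(self):
        # In a real MPI calculator this would block until all ranks arrive.
        self.calls += 1
        return -1.0


def log_broken(logfile, calc):
    # The problematic ordering: non-master ranks (logfile is None)
    # return before the collective energy call.
    if logfile is None:
        return
    calc.get_potential_energy()  # only the master reaches this


def log_safe(logfile, calc):
    # Safe ordering: the collective call happens on every rank first,
    # then only the master actually writes the log line.
    calc.get_potential_energy()
    if logfile is None:
        return
    # ... master formats and writes the log line here ...


nranks = 4
broken = CollectiveCalculator()
for rank in range(nranks):
    log_broken("log" if rank == 0 else None, broken)
print(broken.calls)  # 1 -> only the master called; real MPI would hang

safe = CollectiveCalculator()
for rank in range(nranks):
    log_safe("log" if rank == 0 else None, safe)
print(safe.calls)  # 4 -> all ranks participate in the collective call
```

The revert restores the safe ordering: get_potential_energy() is reached on all ranks before any master-only logic.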
I will submit a merge request reverting most of @cbc94b93 in ase/optimize/optimize.py while keeping the addition of the force_consistent flag, but I want to mention it here in case that breaks anything else. Please protest in that case :-)
Here is a link to the diff I will revert: https://gitlab.com/ase/ase/commit/cbc94b93043b90fa284920c0412f8d2102e1de81#201b79af7d56c4d20c4b58814a15b0399dd67888
I will only revert in optimize.py, and preserve this change:
         fmax = sqrt((forces**2).sum(axis=1).max())
-        e = self.atoms.get_potential_energy()
+        e = self.atoms.get_potential_energy(
+            force_consistent=self.force_consistent)
         T = time.localtime()
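For reference, the preserved behaviour amounts to something like the sketch below. MockAtoms, the hard-coded energies, and the example forces are illustrative stand-ins, not ASE code; the point is only that the energy call now forwards the force_consistent flag:

```python
import time
from math import sqrt


class MockAtoms:
    """Stand-in for ase.Atoms; a real calculator would return the
    force-consistent (free) energy when force_consistent=True."""

    def get_potential_energy(self, force_consistent=False):
        return -3.14 if force_consistent else -3.10


forces = [[0.1, 0.0, 0.0], [0.0, 0.2, 0.0]]  # illustrative force array
fmax = sqrt(max(sum(f * f for f in row) for row in forces))

atoms = MockAtoms()
e = atoms.get_potential_energy(force_consistent=True)  # the preserved call
T = time.localtime()
print('%02d:%02d:%02d %15.6f %12.4f' % (T[3], T[4], T[5], e, fmax))
```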
Best regards
Jakob