Which physical law are you

In this second example, PNNs learn classical mechanics, including the fact that the frictional force is proportional to the negative of the velocity, and discover the same stable integrators based on the position Verlet method, all from observational data.

We consider the emergence of Verlet-style integrators from data remarkable. This family of integrators is the preferred choice for molecular dynamics simulations due to its stability. Unlike other algorithms, such as the Runge–Kutta family or the first-order Euler method, Verlet integrators are symplectic and time reversible. This class of integrators has long been known and was proposed independently by several researchers over the decades; see Ref.
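
To make the scheme concrete, here is a minimal sketch of the position-Verlet update the PNNs rediscover, applied to a particle with a harmonic restoring force plus a frictional force proportional to the negative of the velocity. The force constants, timestep, and the backward-difference estimate of the velocity in the drag term are illustrative choices, not values or details from the paper:

```python
import numpy as np

# Illustrative parameters (not from the paper): harmonic constant k,
# drag coefficient gamma, mass m, and timestep dt.
k, gamma, m, dt = 1.0, 0.1, 1.0, 0.01

def force(x, v):
    """Harmonic restoring force plus a frictional force ~ -velocity."""
    return -k * x - gamma * v

def position_verlet_step(x_prev, x_curr):
    """x_{n+1} = 2 x_n - x_{n-1} + (F_n / m) dt^2 (central difference).
    The velocity entering the drag term is estimated from the last two
    positions, one simple choice among several."""
    v_est = (x_curr - x_prev) / dt
    a = force(x_curr, v_est) / m
    return 2.0 * x_curr - x_prev + a * dt**2

# Two seed positions start the two-step recursion.
x_prev, x_curr = 1.0, 1.0
for _ in range(1000):
    x_prev, x_curr = x_curr, position_verlet_step(x_prev, x_curr)
```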

Importantly, we find more complex models that reproduce the data more accurately than PNN1 but neither exhibit time reversibility nor conserve energy. This shows that parsimony is critical for learning models that can provide insight into the physical system at hand and for generalizability. We stress that the equations of motion and an advanced integrator were obtained from observational data of the motion of a particle and the force–displacement relationship alone. We believe that, at the expense of computational cost, the force sub-net could be learned together with the integrator, effectively learning the acceleration from large-enough dynamical datasets.

This assertion is based on the observation that, on fixing some of the network parameters to those that result in a Verlet integrator, the remaining parameters and the force sub-net can be learned from the observational data used above; see Section S7 of the SM. To demonstrate the versatility and generalizability of PNNs, we now apply them to discover melting laws from experimental data. Our goal is to predict the melting temperature of materials from fundamental atomic and crystal properties.

Before feeding these data to PNNs, we perform a standard dimensional analysis so that the inputs and output are dimensionless. From these fundamental quantities, we define four independent quantities with the dimensions of temperature. Additional details on the preprocessing steps and the network architecture, including custom activations, can be found in Section S8 of the SM.
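
The explicit definitions of the four temperature-valued quantities are not reproduced in this excerpt (they are given in Section S8 of the SM). The sketch below illustrates the kind of Buckingham-π preprocessing described; the specific combinations are dimensionally consistent stand-ins, not the paper's definitions:

```python
import numpy as np

k_B = 1.380649e-23      # Boltzmann constant, J/K
hbar = 1.054571817e-34  # reduced Planck constant, J*s

def temperature_scales(m, V, B, G, theta_D):
    """Return quantities with dimensions of temperature built from
    atomic mass m (kg), atomic volume V (m^3), bulk modulus B (Pa),
    shear modulus G (Pa), and Debye temperature theta_D (K).
    Illustrative combinations only."""
    T1 = theta_D                          # already a temperature
    T2 = B * V / k_B                      # elastic energy per atom / k_B
    T3 = G * V / k_B
    T4 = hbar**2 / (m * V**(2/3) * k_B)   # quantum confinement scale
    return np.array([T1, T2, T3, T4])

# Dividing the melting temperature and three of these scales by the
# fourth yields dimensionless inputs and output for the network.
```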

Armed with dimensionless inputs and outputs, we use PNNs to discover melting laws. Varying the parsimony parameter in the objective function generates a family of models; these models are presented in Fig. as a function of their complexity, defined as the sum of the second and third terms of the objective function. The PNN models represent various tradeoffs between accuracy and parsimony, from which we can define a Pareto front of optimal models (see the dashed line).
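
As a minimal sketch (not from the paper's code), the dashed Pareto front can be extracted from the family of trained models by keeping every model not dominated in both objectives:

```python
def pareto_front(models):
    """Given (error, complexity) pairs, return the subset not dominated
    by any other model; lower is better in both objectives."""
    front = []
    for e, c in models:
        dominated = any(e2 <= e and c2 <= c and (e2 < e or c2 < c)
                        for e2, c2 in models)
        if not dominated:
            front.append((e, c))
    return sorted(front)

# Example with made-up (error, complexity) pairs:
print(pareto_front([(0.10, 1), (0.05, 3), (0.12, 2), (0.04, 6)]))
# -> [(0.04, 6), (0.05, 3), (0.10, 1)]
```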

Melting laws discovered by PNNs. The red points show the celebrated Lindemann law, while the blue points show other models discovered. The black dotted line denotes the Pareto front of models, with some models performing better than the Lindemann law while also being simpler. Three models are highlighted and labeled.

The PNN approach finds several simple yet accurate expressions. The simplest non-trivial relationship is given by PNN A, which approximates the melting temperature as proportional to the Debye temperature.
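
In symbols, PNN A has the form below; the constant c is a placeholder, since the fitted value is not reproduced in this excerpt:

```latex
T_m \approx c\,\Theta_D
```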

This makes physical sense, as the Debye temperature is related to the characteristic atomic vibrational frequencies, and stiffer, stronger bonds tend to lead to higher Debye and melting temperatures. Next in complexity, PNN B adds a correction proportional to the shear modulus. This is also physically sensible, as shear stiffness is closely related to melting.
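
Written in the dimensionless-temperature notation of the sketch above, PNN B then takes a form like the following; c1 and c2 are placeholders, and the exact functional form and fitted constants are given in the paper, not here:

```latex
T_m \approx c_1\,\Theta_D + c_2\,\frac{G\,V}{k_B}
```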

This fact is captured by the classic Born instability criterion [51], which associates melting with the loss of shear stiffness. Remarkably, this law, derived using physical intuition, is very close to, but not on, the optimal Pareto front in accuracy-complexity space. Quite interestingly, this model combines the Lindemann expression with the Debye temperature and the bulk (not shear) modulus. This combination is not surprising given the expressions above, but the selection of bulk over shear modulus is not clear at this point and should be explored further.
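
For reference, the classic Lindemann criterion (a textbook result, not taken from the paper's equations) makes the connection concrete: melting occurs when the root-mean-square vibrational amplitude reaches a fixed fraction δ of the interatomic spacing a, which for a Debye solid at high temperature gives

```latex
% High-temperature Debye result: <u^2> = 9 \hbar^2 T / (m k_B \Theta_D^2),
% so setting sqrt(<u^2>) = \delta a at T = T_m yields
T_m \;=\; \frac{\delta^2\, m\, k_B\, \Theta_D^2\, a^2}{9\,\hbar^2},
\qquad a \propto V^{1/3}.
```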

In summary, we proposed parsimonious neural networks that are capable of learning interpretable physics models from data; importantly, they can extract underlying symmetries in the problem at hand and provide physical insight.

This is achieved by balancing accuracy with parsimony; an adjustable parameter controls the relative importance of these two terms and generates a family of models that are Pareto optimal. We quantify parsimony by ranking individual activation functions and favoring fixed weights over adjustable ones. Future work should explore other measures of complexity, such as the complexity of a polynomial expansion of the resulting PNN expression [52] or the curvature of the PNN expression evaluated over the training data. The combination of genetic optimization with neural networks enables PNNs to explore a large function space and obviates the need for estimating numerical derivatives or matching a library of candidate functions, as was done in prior efforts [17, 18]. Additionally, PNNs perform complex compositions of functions, in contrast to sparse regression, which combines functions linearly.
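
One way to implement the parsimony term described above is sketched below; the activation ranks and penalty weights are illustrative assumptions, not the paper's values:

```python
# Each activation carries a hand-assigned complexity rank, and adjustable
# weights cost more than frozen ones.
ACTIVATION_RANK = {"linear": 0, "identity": 0, "square": 1, "exp": 2, "tanh": 3}

def parsimony_score(activations, weights_fixed,
                    rank_penalty=1.0, weight_penalty=0.5):
    """activations: names of the activation functions used by the network.
    weights_fixed: booleans, True where a weight is frozen."""
    act_cost = sum(ACTIVATION_RANK[a] for a in activations)
    weight_cost = sum(0.0 if fixed else 1.0 for fixed in weights_fixed)
    return rank_penalty * act_cost + weight_penalty * weight_cost

# Total objective = prediction error + lambda * parsimony_score(...),
# with lambda the adjustable parameter that traces out the Pareto family.
print(parsimony_score(["linear", "square"], [True, False, True]))  # 1.5
```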

The libraries of activation functions in our first examples of PNNs are relatively small and based on physical intuition; applying PNNs in areas where less is known about the problem at hand requires more extensive sets of activation functions, at increased computational cost.

The state-of-the-art solutions PNNs provide to two quite different problems attest to the power and versatility of the approach. From data describing the classical evolution of a particle in an external potential, the PNN produces integration schemes that are accurate, conserve energy, and satisfy time reversibility. Quite interestingly, the PNNs learn the non-trivial need to evaluate the force at the half step for time reversibility. The optimization could have learned the first-order Runge–Kutta algorithm, which is not reversible, but it favored central-difference-based integrators.

Furthermore, parsimony favors Verlet-type integrators over more complex expressions that describe the training data more accurately but do not exhibit good stability. We note that other high-order integrators are not compatible with our initial network, but these can easily be incorporated by starting with a more complex network.
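
To make the stability argument concrete, here is a minimal, self-contained comparison (not from the paper) of forward Euler against position Verlet for a harmonic oscillator with m = k = 1; the timestep and step count are arbitrary choices:

```python
# Symplectic integrators such as Verlet bound the energy error, while
# forward Euler's energy grows without bound.
dt, n_steps = 0.05, 20000

def energy(x, v):
    return 0.5 * v**2 + 0.5 * x**2

# Forward (first-order) Euler
x, v = 1.0, 0.0
for _ in range(n_steps):
    x, v = x + v * dt, v - x * dt
print("Euler energy:", energy(x, v))        # blows up

# Position Verlet, started to second order with v(0) = 0
x_prev, x_curr = 1.0, 1.0 - 0.5 * dt**2     # x(0), x(dt)
for _ in range(n_steps):
    x_prev, x_curr = x_curr, 2 * x_curr - x_prev - x_curr * dt**2
v = (x_curr - x_prev) / dt
print("Verlet energy:", energy(x_curr, v))  # stays near 0.5
```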

The fact that such knowledge and algorithms can be extracted automatically from observational data has, however, deep implications for other problems and fields. This is confirmed with a second example that shows the ability of PNNs to extract interpretable melting laws from experimental data. We discover a family of expressions with varying tradeoffs between accuracy and parsimony.

To discover integration schemes, training data were generated using molecular dynamics simulations in the NVE ensemble, using the velocity Verlet scheme with a short timestep (about a tenth of what is required for accurate integration); see Section S1 of the SM for additional details.

Our test set is a separate trajectory with a different total energy. To discover novel melting laws, we queried the Pymatgen and Wolfram Alpha databases for experimental melting temperatures. We obtained fundamental material properties, such as volume and shear modulus, by querying the Materials Project.
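
A hedged sketch of such a query is shown below, assuming the legacy pymatgen MPRester interface; newer Materials Project clients (mp_api) expose a different API, an API key from materialsproject.org is required, and the property names follow the legacy document schema:

```python
from pymatgen.ext.matproj import MPRester

# Legacy interface; field names and the query() method may differ in
# newer releases of pymatgen / the Materials Project API.
with MPRester("YOUR_API_KEY") as mpr:
    entries = mpr.query(
        criteria={"elasticity": {"$exists": True}},
        properties=["material_id", "pretty_formula", "volume",
                    "elasticity.G_VRH", "elasticity.K_VRH"],
    )
print(len(entries), "materials with elastic data")
```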

Additional details are provided in Section S8 of the SM. We used populations of individuals evolved with a two-point crossover scheme and random mutations acting on the network weights and activation functions. For each generation, the individual networks in the population are trained using backpropagation with the same protocols as for the feed-forward networks; only adjustable weights are optimized in this operation. The populations were evolved over 50 generations; additional details of the genetic algorithm are included in Section S5 of the SM.
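
A minimal sketch of the variation operators described (two-point crossover and random mutation on a flat genome); the genome encoding and mutation rate are illustrative assumptions, not the paper's:

```python
import random

def two_point_crossover(parent_a, parent_b):
    """Swap the segment between two random cut points. Here a genome is
    a flat list encoding per-connection weight types and per-neuron
    activation choices."""
    i, j = sorted(random.sample(range(1, len(parent_a)), 2))
    child_a = parent_a[:i] + parent_b[i:j] + parent_a[j:]
    child_b = parent_b[:i] + parent_a[i:j] + parent_b[j:]
    return child_a, child_b

def mutate(genome, options, rate=0.05):
    """Re-draw each gene from its allowed options with a small probability.
    options[k] lists the admissible values for gene k."""
    return [random.choice(options[k]) if random.random() < rate else g
            for k, g in enumerate(genome)]
```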

References

Simonyan, K. & Zisserman, A. Very deep convolutional networks for large-scale image recognition. In International Conference on Learning Representations (2015).
Krizhevsky, A., Sutskever, I. & Hinton, G. E. ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems (eds Pereira, F., Burges, C. J. C., Bottou, L. & Weinberger, K. Q.) (Curran Associates, Inc., 2012).
Bengio, Y., Ducharme, R., Vincent, P. & Jauvin, C. A neural probabilistic language model. J. Mach. Learn. Res. 3, 1137–1155 (2003).
Carrasquilla, J. & Melko, R. G. Machine learning phases of matter. Nat. Phys. 13, 431–434 (2017).
Senior, A. W. et al. Improved protein structure prediction using potentials from deep learning. Nature 577, 706–710 (2020).
Meredig, B. et al. Combinatorial screening for new materials in unconstrained composition space with machine learning. Phys. Rev. B 89, 094104 (2014).
Carrete, J. et al. Finding unprecedentedly low-thermal-conductivity half-Heusler semiconductors via high-throughput materials modeling. Phys. Rev. X 4, 011019 (2014).
Bassman, L. et al. Active learning for accelerated design of layered materials. npj Comput. Mater. (2018).
Kaufmann, K. et al. Discovery of high-entropy ceramics via machine learning. npj Comput. Mater. (2020).
Snyder, J. C., Rupp, M., Hansen, K., Müller, K.-R. & Burke, K. Finding density functionals with machine learning. Phys. Rev. Lett. 108, 253002 (2012).
Li, Z., Kermode, J. R. & De Vita, A. Molecular dynamics with on-the-fly machine learning of quantum-mechanical forces. Phys. Rev. Lett. 114, 096405 (2015).
Behler, J. & Parrinello, M. Generalized neural-network representation of high-dimensional potential-energy surfaces. Phys. Rev. Lett. 98, 146401 (2007).
Jacobsen, T. L., Jørgensen, M. S. & Hammer, B. On-the-fly machine learning of atomic potential in density functional theory structure optimization. Phys. Rev. Lett. 120, 026102 (2018).
Raissi, M., Perdikaris, P. & Karniadakis, G. E. Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations. J. Comput. Phys. 378, 686–707 (2019).
Ling, J., Kurzawski, A. & Templeton, J. Reynolds averaged turbulence modelling using deep neural networks with embedded invariance. J. Fluid Mech. 807, 155–166 (2016).
Schmidt, M. & Lipson, H. Distilling free-form natural laws from experimental data. Science 324, 81–85 (2009).
Rudy, S. H., Brunton, S. L., Proctor, J. L. & Kutz, J. N. Data-driven discovery of partial differential equations. Sci. Adv. 3, e1602614 (2017).
Champion, K., Lusch, B., Kutz, J. N. & Brunton, S. L. Data-driven discovery of coordinates and governing equations. Proc. Natl. Acad. Sci. USA 116, 22445–22451 (2019).
Harp, S. A., Samad, T. & Guha, A. Designing application-specific neural networks using the genetic algorithm. In Advances in Neural Information Processing Systems.
Miller, G. F., Todd, P. M. & Hegde, S. U. Designing neural networks using genetic algorithms. In Proceedings of the Third International Conference on Genetic Algorithms (ICGA '89) (1989).
Stepniewski, S. W. & Keane, A. J. Pruning backpropagation neural networks using modern stochastic optimisation techniques. Neural Comput. Appl. (1997).
Yao, X. Evolving artificial neural networks. Proc. IEEE 87, 1423–1447 (1999).

They are not as interesting as those I have described, and do not deal exactly with the conservation of numbers.

The first one is change of scale. It is not true that if you build an apparatus, and then build another one with every part made exactly the same, of the same kind of stuff, but twice as big, it will work in exactly the same way.

I will give you some very simple examples to show how, knowing the law of conservation of energy and the formulae for calculating energy, we can understand other laws. In other words, many other laws are not independent, but are simply secret ways of talking about the conservation of energy. The simplest is the law of the lever.
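
As a worked line of the argument (the standard virtual-work bookkeeping, filled in here since the figure is not reproduced):

```latex
% A lever balances weights W_1, W_2 at arms l_1, l_2 from the fulcrum.
% Tip it through a small angle \theta: one weight drops by l_1\theta
% while the other rises by l_2\theta. If energy is conserved (no net
% work done), then
W_1\, l_1\, \theta \;=\; W_2\, l_2\, \theta
\quad\Longrightarrow\quad
W_1\, l_1 = W_2\, l_2 .
```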

The great gods who play this chess play it very rapidly, and it is hard to watch and difficult to see. However, we are catching on to some of the rules, and there are some rules which we can work out which do not require that we watch every move. For instance, suppose there is only one bishop, a red bishop, on the board; then, since the bishop moves diagonally and therefore never changes the colour of its square, if we look away for a moment while the gods play and then look back again, we can expect that there will still be a red bishop on the board, maybe in a different place, but on the same colour square.

This is in the nature of a conservation law. We do not need to watch the insides to know at least something about the game. The neutrino was proposed in just this spirit: in beta decay, energy appeared to go missing, and an unseen particle was hypothesized to carry it away. But it makes a lot of other things right, like the conservation of momentum and other conservation laws, and very recently it has been directly demonstrated that such neutrinos do indeed exist.

How is it possible that we can extend our laws into regions we are not sure about? Why are we so confident that, because we have checked the energy conservation here, when we get a new phenomenon we can say it has to satisfy the law of conservation of energy? Because of the relation of mass and energy, the energy associated with the motion appears as an extra mass, so things get heavier when they move.
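
The standard relation behind this statement, with m0 the rest mass, is

```latex
% For v << c the extra mass is the kinetic energy divided by c^2:
m \;=\; \frac{m_0}{\sqrt{1 - v^2/c^2}}
\;\approx\; m_0 + \frac{1}{c^2}\left(\tfrac{1}{2} m_0 v^2\right).
```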

Suppose we set up some arrangement of matter that is bilaterally symmetrical. Then, following the laws of physics, with all the movements and collisions, you could expect, and rightly, that if you look at the same picture later on it will still be bilaterally symmetrical. So there is a kind of conservation, the conservation of the symmetry character.

This should be in the table, but it is not like a number that you measure, and we will discuss it in much more detail in the next lecture. The reason this is not very interesting in classical physics is that the times when there are such nicely symmetrical initial conditions are very rare, and it is therefore a not very important or practical conservation law. Consider a rocket: at first it is standing still, say, in empty space, and then it shoots some gas out of the back, and the rocket goes forward.

The point is that of all the stuff in the world, the centre of mass, the average of all the mass, is still right where it was before. The interesting part has moved on, and an uninteresting part that we do not care about has moved back.

That is the sense in which we say that the laws of physics are symmetrical; that there are things we can do to the physical laws, or to our way of representing the physical laws, which make no difference, and leave everything unchanged in its effects.

It is this aspect of physical laws that is going to concern us in this lecture. The consequences of these symmetries, furthermore, are extendable into laws that we do not yet know.

For example, by guessing that this principle is true for the disintegration of a mu meson, we can state that we cannot use mu mesons to tell how fast we are going in a space ship either; and thus we know something at least about mu meson disintegration, even though we do not know why the mu meson disintegrates in the first place.

Too bad.

In these Messenger Lectures, originally delivered at Cornell University and recorded for television by the BBC, Richard Feynman offers an overview of selected physical laws and gathers their common features into one broad principle of invariance.



