Teaching computational skills to undergraduate physics students, much like teaching them multivariable calculus or linear algebra, amounts to gifting them with tools of analysis that are useful in every physics course they might take. Computation allows students to efficiently generate families of solutions to problems that they could solve analytically, solve problems with no analytical solution, and visualize both types of solution to build intuition and understanding. Moreover, the endeavor of using a computer to solve a physics problem confronts students with a host of elements that we hope they will keep in mind as they become effective problem-solvers: modeling with appropriate approximations and well-chosen variables, visualizing the results of calculations, communicating results to others together with appropriate error analyses, and so on. The liberal arts curriculum at Swarthmore does not currently require an entire course on computational physics (which is offered as an elective), but it does require a laboratory section of a required course on mathematical physics. In this talk, we will describe the way that computation has historically fit into our liberal arts curriculum: both the way it has been taught and the way it is utilized by students in seminars (which constitute our upper-level curriculum) and in research (which is required of all Honors majors in physics and astronomy). Additionally, the task of programming in any of its various forms (be it typing an input cell in Mathematica or authoring a program unit in a procedural language) instills understanding of the physics in a deep and meaningful way. There is perhaps no better route to understanding, for example, how a ray of light propagates through a dispersive medium than trying to "explain to a computer" how to make it happen. [PDF Slides]
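As a minimal sketch of what "explaining to a computer" might look like for light in a dispersive medium (my illustration, not taken from the talk), the following applies Snell's law with a wavelength-dependent refractive index; the Cauchy coefficients are approximate values for BK7 glass:

```python
import numpy as np

def cauchy_index(wavelength_um, A=1.5046, B=0.0042):
    """Cauchy approximation n(lambda) = A + B/lambda^2.

    Default coefficients are approximate values for BK7 glass,
    with the wavelength in micrometers.
    """
    return A + B / wavelength_um**2

def refract(theta_incident, wavelength_um):
    """Snell's law: refraction angle entering glass from air (n = 1)."""
    n = cauchy_index(wavelength_um)
    return np.arcsin(np.sin(theta_incident) / n)

# A white-light ray at 45 degrees splits: blue (0.45 um) sees a larger
# index than red (0.65 um), so it bends more (smaller refraction angle).
theta = np.radians(45.0)
for lam_um in (0.45, 0.65):
    print(f"{lam_um} um -> {np.degrees(refract(theta, lam_um)):.2f} deg")
```

Writing even this much forces a student to decide how the index depends on wavelength and which variables matter, which is precisely the point of the exercise.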
There are many indicators pointing to the need to enrich, or even transform, the traditional undergraduate physics curriculum in view of the steady development of uses for computing in the sciences and engineering. I will argue that what distinguishes this need from others in the past is its challenge that we consider modifying the way we think about physics, not merely the list of topics for the courses or the methods of instruction. In a way this is the challenge of this conference, and thus it is fitting to describe the whys and wherefores as a way of introducing the conference agenda. However, given the way we have structured the sessions, I also must provide a context for the two talks that follow. To describe the need and its challenges, it is helpful to start by looking at several technical examples – adaptive optics, stellar collapse precursors to supernovae, and biomedical functional imaging – where computational physics has been essential to addressing specific scientific and engineering problems. In fact, statistical survey data on where our graduates end up working and how well their academic experience has prepared them for their careers are relevant to these examples and help support a case for reform. My intention is to offer a framework for the challenges that helps organize what follows: your contributions and discussions in this conference, and their starting point in this session, insights drawn from research practice that suggest directions for computational reform of not just our courses but also our curricula. [PDF Slides]
Modeling can be considered one conceptual step beyond using simulations in our lectures and computer lab sessions. Modeling can help us improve students’ understanding of physics because models, better than simulations:
help students understand equations as physical relationships among quantities;
give students engaging, hands-on learning experiences;
serve as sketchpads on which students can explain their understanding to each other and to instructors;
and, help students recognize their own misconceptions.
Moreover, learning to program a computer is becoming an important part of the capabilities expected of every science or engineering graduate in the twenty-first century.
However, asking students to create computer models imposes an extra technological barrier. Physics students want to learn Physics, not Computer Science. Tools are required that offer access to real scientific-programming tasks, while lowering the technical threshold required. And, yet, the resulting programs need to be far from technologically naïve.
We show in this talk how we solved this problem and how we use modeling both to motivate students to learn science and to become more computer literate. [PDF Slides]
Lattice Gauge Theory employs a number of numerical and statistical techniques, including sparse matrix inversion, Monte Carlo methods, higher-order numerical integration schemes, resampling methods such as jackknife and bootstrap, and parameter estimation from correlated data. Many of these techniques can be taught to undergraduates in contexts more easily understood than a lattice gauge theory simulation. [PDF Slides]
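To illustrate one of the resampling methods mentioned, here is a hedged sketch (mine, not the speaker's) of the jackknife: leave out one sample at a time, recompute the estimator, and combine the results into an error bar that works for any estimator, not just the mean:

```python
import numpy as np

def jackknife(data, estimator):
    """Jackknife estimate and error bar for an arbitrary estimator.

    Works for the mean, a ratio, a fit parameter, etc. -- any function
    of the full dataset.
    """
    n = len(data)
    full = estimator(data)
    # Recompute the estimator n times, each time deleting one sample.
    leave_one_out = np.array([estimator(np.delete(data, i)) for i in range(n)])
    # Jackknife variance: (n-1)/n times the sum of squared deviations.
    error = np.sqrt((n - 1) / n * np.sum((leave_one_out - full) ** 2))
    return full, error

rng = np.random.default_rng(0)
samples = rng.normal(loc=1.0, scale=0.5, size=200)
mean, err = jackknife(samples, np.mean)
print(f"mean = {mean:.4f} +/- {err:.4f}")
```

A useful classroom check: for the sample mean, the jackknife error reproduces the familiar standard error s/sqrt(n) exactly, while for nonlinear estimators it gives an error bar that has no simple closed form.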
Many fundamental ideas and tools of scientific computing are applicable across a broad spectrum of scientific disciplines. In the life sciences, mathematical and computational approaches are increasingly important, particularly in the rapidly developing field of genomics. The widespread use of high-throughput genome-wide experiments has produced massive datasets from which we may learn many secrets of life if we use the right tools. This talk will describe two areas of genomics research in which computational techniques are central to success: microarrays and synthetic biology. [PDF Slides]
Evidence and arguments will be presented that modifications in the undergraduate physics curriculum are necessary to maintain the long-term relevance of physics. Suggested will be a balance of analytic, experimental, computational, and communication skills that in many cases will require an increased inclusion of computation and its associated skill set into the undergraduate physics curriculum. The general arguments will be followed by a detailed enumeration of suggested subjects and student learning outcomes, many of which have already been adopted or advocated by the computational science community, and which encompass high-performance computing and communication. Several alternative models for how these computational topics can be incorporated into the undergraduate curriculum will be discussed, including enhanced topics in the standard existing courses as well as stand-alone courses. Applications and demonstrations will be presented throughout the talk, as well as prototype video-based materials and electronic books. [PDF Slides]
Many significant scientific research questions are interdisciplinary in nature, involving physical and/or biological sciences, mathematics, and computer science; and much scientific investigation now involves computing as well as theory and experiment. Consequently, a critical need exists for scientists to know how to use computation in their work. With these issues in mind and with help from NSF (Grant 0087979) and the Shodor Foundation, Wofford College faculty members created extensive educational materials and developed an undergraduate Emphasis in Computational Science (ECS) for interested science and mathematics majors (http://www.wofford.edu/ecs/). We have affirmed that with appropriate foundations in mathematics and computer science, science majors can perform meaningful computational science research in internships, graduate school, and post-graduate positions. Besides citing particular student experiences and describing Wofford's program, this talk will include coursework and internship recommendations from "Undergraduate Computational Science and Engineering Education," a report from the Society for Industrial and Applied Mathematics (SIAM) Working Group. [PDF Slides]
The computational exploration of chaotic systems is a currently fashionable area of research that is easily within the grasp of undergraduates with moderate programming skills in most any language. Simple iterated maps such as the logistic equation and low-dimensional systems of ordinary differential equations such as the Lorenz equations offer a wealth of interesting behaviors but not much opportunity for undergraduates to make original discoveries. Much of the interest in nonlinear dynamics is now turning to high-dimensional networks that not only exhibit chaos but also relatively unexplored phenomena such as self-organization, pattern formation, symmetry breaking, and multi-stationarity. Such networks are ubiquitous in fields as diverse as ecology, economics, sociology, epidemiology, and neurology. What is lacking are algebraically simple examples of such high-dimensional systems that exhibit these behaviors and can thus serve as prototypes of complex systems, much as the logistic map and the Lorenz equations do for low-dimensional chaotic systems. This talk will describe a number of such systems that can be used to teach these topics and to form the basis for original research accessible to undergraduates. [PDF Slides]
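A minimal sketch (my illustration, not the speaker's materials) of the kind of exploration the logistic map invites: iterate x_{n+1} = r x_n (1 - x_n), discard the transient, and count the distinct long-term values to distinguish a fixed point, a periodic cycle, and chaos:

```python
import numpy as np

def logistic_orbit(r, x0=0.2, n_transient=500, n_keep=100):
    """Iterate the logistic map x -> r*x*(1-x), discarding the transient."""
    x = x0
    for _ in range(n_transient):
        x = r * x * (1 - x)
    orbit = []
    for _ in range(n_keep):
        x = r * x * (1 - x)
        orbit.append(x)
    return np.array(orbit)

# Count distinct long-term values (rounded to 6 decimals):
# r = 2.8 -> stable fixed point; r = 3.2 -> period-2 cycle; r = 3.9 -> chaos.
for r in (2.8, 3.2, 3.9):
    n_distinct = len(np.unique(np.round(logistic_orbit(r), 6)))
    print(f"r = {r}: {n_distinct} distinct long-term value(s)")
```

Sweeping r finely and plotting the retained orbit points against r produces the familiar bifurcation diagram, a few lines of code that students can then adapt to the high-dimensional networks the talk describes.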
Everybody recognizes that professors and students see physics problems differently. For physicists, graphs and equations open a direct window into the fundamental physics they describe, whereas students – even good students – can get stuck in the formalism and never make their way to the physical situation. This difference can seriously hinder a student’s understanding of science.
One of the goals of using computers for teaching is to allow students to see physics as a scientist does. The use of computer simulations and, more importantly, programs written by the students themselves, can profoundly deepen their understanding. Ideally, computer simulations lead students to instantiate the physical situation and its dynamic variations, an approach far different from manipulating equations and graphs.
Unfortunately, professors and students also see computers differently – not always to the advantage of the professor! This creates its own problems and opportunities, which will be the subject of this talk. [PDF Slides]