We use several integrated on-chip converters as simulation examples
to measure running time and speedup. They include a Buck converter,
a resonant flyback converter, and two boost converters.
Each converter is directly integrated with on-chip power grid networks.
Figure~\ref{fig:flyback_wave}
and Figure~\ref{fig:buck_wave}
show the waveforms at the output nodes of the resonant flyback converter
and the Buck converter, respectively. The dots mark the points
calculated in those cycles, and the segments without dots are the
envelope jumps where no simulation was done.
It can be verified that the proposed Gear-2 envelope-following method
follows the envelope of the detailed transient waveform closely.
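To make the envelope jumps concrete, the following is a toy sketch of one Gear-2 (BDF2) envelope step. The names `cycle_map`, `envelope_rate`, and `gear2_envelope_step` are illustrative assumptions, not the paper's code: `cycle_map` stands in for a full transient simulation of one switching period, and the envelope is advanced many cycles at a time by solving the implicit BDF2 equation with Newton's method.

```python
# Toy Gear-2 (BDF2) envelope-following sketch.  `cycle_map` is a
# hypothetical stand-in for one detailed transient cycle; the real method
# would run a circuit simulation of one switching period here.

def cycle_map(x, a=0.9, b=0.5):
    # Linear toy cycle map: the state contracts toward b / (1 - a) = 5.0.
    return a * x + b

def envelope_rate(x, cycle_period=1.0):
    # Slow envelope derivative: net change of the state over one cycle.
    return (cycle_map(x) - x) / cycle_period

def gear2_envelope_step(x_prev, x_curr, H):
    """Jump H cycles ahead with the Gear-2 (BDF2) formula
       x_new = (4/3) x_curr - (1/3) x_prev + (2/3) H f(x_new),
    solving the implicit equation by Newton with a finite-difference slope."""
    x_new = x_curr  # predictor
    for _ in range(50):
        g = (x_new - (4.0 / 3.0) * x_curr + (1.0 / 3.0) * x_prev
             - (2.0 / 3.0) * H * envelope_rate(x_new))
        eps = 1e-7
        g_eps = (x_new + eps - (4.0 / 3.0) * x_curr + (1.0 / 3.0) * x_prev
                 - (2.0 / 3.0) * H * envelope_rate(x_new + eps))
        x_new -= g / ((g_eps - g) / eps)  # Newton update
        if abs(g) < 1e-12:
            break
    return x_new

# Bootstrap with one detailed cycle, then jump 10 cycles at a time
# (for simplicity the first two points are treated as H apart).
env = [0.0, cycle_map(0.0)]
for _ in range(5):
    env.append(gear2_envelope_step(env[-2], env[-1], H=10.0))
print(env[-1])  # settles near the steady-state envelope value 5.0
```

Only the dotted cycles cost a detailed simulation; the ten-cycle jumps in between are exactly the dot-free envelope segments described above.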
To compare the running time spent in solving the
Newton update equation, Table~\ref{table:circuit} lists the time
taken by the direct method, explicit GMRES, matrix-free GMRES,
and GPU matrix-free GMRES. All methods carry out the Gear-2-based
envelope-following method, but they differ in how the sensitivity
matrix and the Newton update equation are handled.
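The key observation behind the matrix-free variants is that GMRES never needs the sensitivity matrix itself, only its product with a vector, and that product can be obtained from the one-cycle map by a directional finite difference. The sketch below illustrates this with a toy nonlinear map `phi` and a bare-bones GMRES; `phi`, the tolerances, and the 2-dimensional state are illustrative assumptions, not the paper's circuit equations.

```python
import numpy as np

# Matrix-free idea: GMRES only consumes products J*v with the sensitivity
# (Jacobian) matrix of the one-cycle map phi, so J is never formed.
# `phi` is a toy nonlinear stand-in for one detailed transient cycle.

def phi(x):
    A = np.array([[0.9, 0.05], [0.02, 0.8]])
    return A @ x + 0.01 * x**2  # mildly nonlinear cycle map

def jacobian_vector_product(x, v, eps=1e-6):
    # Directional finite difference: J(x) v ~= (phi(x + eps v) - phi(x)) / eps.
    return (phi(x + eps * v) - phi(x)) / eps

def gmres(matvec, b, tol=1e-6, max_iter=50):
    """Bare-bones GMRES on a matvec callable (no restarts, no preconditioner)."""
    n = b.size
    Q = np.zeros((n, max_iter + 1))
    H = np.zeros((max_iter + 1, max_iter))
    beta = np.linalg.norm(b)
    Q[:, 0] = b / beta
    for k in range(max_iter):
        w = matvec(Q[:, k])            # the only place the operator is used
        for j in range(k + 1):         # Arnoldi orthogonalization
            H[j, k] = Q[:, j] @ w
            w -= H[j, k] * Q[:, j]
        H[k + 1, k] = np.linalg.norm(w)
        if H[k + 1, k] > 1e-14:
            Q[:, k + 1] = w / H[k + 1, k]
        # Small least-squares problem: min || beta e1 - H y ||.
        e1 = np.zeros(k + 2)
        e1[0] = beta
        y, _, _, _ = np.linalg.lstsq(H[:k + 2, :k + 1], e1, rcond=None)
        if np.linalg.norm(H[:k + 2, :k + 1] @ y - e1) < tol:
            break
    return Q[:, :k + 1] @ y

x = np.array([1.0, 2.0])
rhs = np.array([0.3, -0.1])
sol = gmres(lambda v: jacobian_vector_product(x, v), rhs)
# Check against the explicitly formed Jacobian, i.e. the matrix that
# explicit GMRES (and the direct method) would have to build and store.
J = np.array([[0.9 + 0.02 * x[0], 0.05], [0.02, 0.8 + 0.02 * x[1]]])
print(np.linalg.norm(J @ sol - rhs))  # small residual
```

Each implicit matrix-vector product costs one extra evaluation of `phi`, i.e. one transient cycle, whereas forming the matrix explicitly costs one evaluation per state variable.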
It is obvious that as long as the sensitivity matrix is explicitly formed,
the cost of building it dominates; once matrix-free GMRES evaluates the
matrix-vector products implicitly, the computation cost is greatly reduced.
Thus, for the same example, implicit GMRES is one order
of magnitude faster than explicit GMRES. Furthermore, our GPU parallel