Open question: how specific to Rennes campus is the grid voltage shape?
Three-phase grid voltage record (output of a 15Vrms transformer) at CentraleSupélec, Rennes campus
Grid voltage harmonics analysis
More grid voltage records data (by RTE)
That being said, after a tiny data survey, I found that RTE made available last year a dataset of 12,053 grid fault recordings, i.e. a much richer database! https://github.com/rte-france/digital-fault-recording-database
Context: grid connected inverter
This was created in the context of another investigation: the harmonic current perturbation in a grid-connected inverter. And in the end, the harmonic current indeed looks proportional to the harmonic voltage, so the grid imperfection (albeit perfectly within compliance standards) is the likely cause (along with the simplistic inverter control, which makes no effort to reject harmonics)...
Oscilloscope record of harmonic current perturbation in a grid connected inverter (grid following current control, with current set point equal to 0.0 A)
[In French] I've uploaded the documents I used last month for a one-hour lecture at a primary school (7–8 year old children), with a theoretical part on electrical energy conversion and a practical part with small PV panels.
This post is meant to help people in search of a refurbished notebook. Indeed, it’s easy to get lost in the processor references (“how does an Intel Core i5-8250U compare to an i5-10210U?” Spoiler: they are almost the same!). It is the outcome of me researching whether it is worth replacing my Lenovo Thinkpad T470 notebook, a 2017 model bought second-hand in 2020. While the conclusion is that I’ll probably wait a bit more, this was a nice data-research exercise on the evolution of the computing power of office notebooks over the last decade.
In this post, I focus on key results, while the full dataset and associated Python code are available in the GitHub repository https://github.com/pierre-haessig/notebook-cpu-performance/. Also, there is a CPU table, with characteristics and performance scores, hosted on Baserow. Testing this so-called “no-code” cloud database was an additional excuse for this project... While I don’t want to comment on my experience here, I’ll just say I found it a neat alternative to spreadsheets for doing database-style work (e.g. creating links between tables).
Dataset description
About the scores: I’ve extracted computing performance scores from the Geekbench 6 data browser https://browser.geekbench.com/ which conveniently makes its database public (in exchange for Geekbench users automatically contributing to the database for free). This test provides two scores:
Single-Core score is useful to assess the execution speed of one application (webmail, text editor, spreadsheet...)
Multi-Core score is geared towards multitasking (or some computations which can exploit all processor cores like video encoding, I believe).
Geekbench explains that these scores are proportional to the processing speed, so a score which is twice as large means a processor that is twice as fast, that is a computing task completed in half the time, if I got it right.
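As a tiny sanity check of this proportional reading, with made-up scores (not actual Geekbench values):

```python
# Made-up scores for illustration: if scores are proportional to speed,
# the task duration scales as the inverse of the score ratio.
score_old, score_new = 1000, 2000
speedup = score_new / score_old          # the new CPU is 2x faster...
duration_ratio = score_old / score_new   # ...so a task takes half the time
print(speedup, duration_ratio)           # → 2.0 0.5
```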
About the processor selection: I’ve extracted the scores for CPUs found in typical refurbished office notebooks, that is, midrange & low-power CPUs: Intel Core i5 processors from the so-called U series, but also the corresponding AMD Ryzen 5. The earliest model is the Intel Core i5-5200U launched in 2015, while the most recent is the Intel Core Ultra 125U/135U from late 2023 (there wasn’t enough data for the 2024 Intel Core Ultra 2xx models). Notice that there are no refurbished notebooks from 2023 on the market yet, so there is some amount of guessing about what the "typical office notebook CPUs" of 2023–2025 will be.
CPU performance charts
Enough speaking, here are two graphs of processor performance.
1) Ranked CPU performance
First, a dot plot/boxplot chart showing CPU performance, ranked by Single-Core performance (which is close to, but not quite the same as, the Multi-Core ranking):
Chart of Geekbench CPU performance of typical office notebooks
The horizontal scale for the score is logarithmic, so that each vertical grid line means a difference by a factor 2. This allows superimposing Single-Core (blue) and Multi-Core (green) data.
The diamonds are the median scores, while the lines show the variability. Indeed, there is no unique value for each CPU, due to the many factors which can affect computing performance: temperature, power source (battery vs. AC adapter) and energy-saving policy (max performance vs. max battery life). To account for this, I’ve collected about 1000 samples for each CPU and summarized them by computing quantiles. The thick lines show the 25%–75% quantiles (the interquartile range, which contains half the samples) while the thin line ended by whiskers shows the 10%–90% quantiles (the most extreme deciles, which span 80% of the samples).
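These summary statistics take only a few lines with NumPy; here is a sketch with random data standing in for the real Geekbench samples:

```python
import numpy as np

# Fake score samples standing in for the ~1000 Geekbench results of one CPU
rng = np.random.default_rng(42)
samples = rng.normal(loc=1500, scale=150, size=1000)

# Median (diamond), interquartile range (thick line), 10%-90% deciles (whiskers)
q10, q25, median, q75, q90 = np.quantile(samples, [0.10, 0.25, 0.50, 0.75, 0.90])
print(f"median: {median:.0f}, IQR: [{q25:.0f}, {q75:.0f}], deciles: [{q10:.0f}, {q90:.0f}]")
```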
Notice that to avoid data overcrowding, this is the “short” version of the chart, where I picked one CPU model per generation, while there are typically two (like i5-1135 and i5-1145, with very close performance). The full chart variant (twice as long) can be found in the chart repository, along with the variants sorted by Multi-Core score.
2) CPU performance over time, with trend
The second chart shows the evolution of CPU performance over time. Only the median score is shown here, to avoid overcrowding the graph, but with all the CPU models of the dataset. This graph is interactive (perhaps a mouse is required, though): hovering over each point reveals the processor name and details like the number of cores.
[EDIT: for some unknown reason, the tooltip which is supposed to appear when hovering points has an erratic display (it pops up a large blank rectangle). In the meantime, I've put the charts on a dedicated page which works fine]
Vega viz of CPU performance over time, linear scale
The dot color and shape discriminate the CPU designer: Intel vs. AMD (the classical term “manufacturer” is not appropriate for AMD, nor for the latest Intel products). The gray dotted line shows the exponential trend, that is, a constant-rate growth trend, in the spirit of Moore’s law (doubling every 2 years), albeit with a much smaller rate; see values below.
This second plot was done with Vega-Altair which is very nice for data exploration and interactivity. However, it is perhaps a bit less customizable than my “old & reliable” plotting companion, Matplotlib, used for the first chart. In particular, I didn’t manage to replicate the nice log2 scale: I created it by manually setting the tick positions and labels (see Retrieve_geekbench.ipynb notebook, “Log2 scale variant”), whereas it could be achieved more cleanly with Matplotlib’s tick formatter. So here is the log2 variant of the performance-over-time chart, albeit with the raw log2 scores. With this scale, the exponential trend becomes a straight line.
Vega viz of CPU performance over time, log scale
How to read these raw log2 scores: values [0, 1, 2, 3] correspond to [1000, 2000, 4000, 8000], the general formula being y = 1000 × 2^x. Also, because there are grid lines for half scores (0.5 → 1000 × √2 ≈ 1414), scores which differ by a factor of 2 are here separated by two grid lines, instead of just one in the first plot.
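As an aside, here is a minimal Matplotlib sketch of such a log2 axis with plain score labels (illustrative data, not the repository code):

```python
import matplotlib
matplotlib.use("Agg")  # headless rendering, no display needed
import matplotlib.pyplot as plt
from matplotlib.ticker import FixedLocator, FuncFormatter

scores = [1000, 1414, 2000, 2828, 4000]   # illustrative scores, a factor sqrt(2) apart
fig, ax = plt.subplots()
ax.plot(scores, range(len(scores)), "o")
ax.set_xscale("log", base=2)              # log2 horizontal axis
# One tick per sqrt(2) step, labeled with the plain score value (1000, 1414, ...)
ticks = [1000 * 2 ** (k / 2) for k in range(5)]
ax.xaxis.set_major_locator(FixedLocator(ticks))
ax.xaxis.set_major_formatter(FuncFormatter(lambda val, pos: f"{val:.0f}"))
fig.savefig("log2_scale.png")
```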
Comments on CPU performance
What can be said from the charts above?
1) General trend
Sure, there has been some progress in CPU performance over the decade!
But the progress is faster for Multi-Core than for Single-Core performance. In detail, the exponential trend fitting yields:
Single-core trend: +12% per year (±1%), or, said differently, a time to double the performance of about 6.25 years (±10 months)
Multi-core trend: +20%/y ±2%/y, or a time to double the performance of about 3.75 years (±4 months)
→ In both cases, the growth is well slower than the “doubling every 2 years” (that is, +41%/y) of Moore’s law.
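The conversion between yearly growth rate and doubling time is T = ln(2)/ln(1+r); a quick check of the (rounded) trend values above:

```python
from math import log

def doubling_time(rate):
    """Years to double at a constant yearly growth rate (e.g. rate=0.12 for +12%/y)."""
    return log(2) / log(1 + rate)

print(f"+12%/y -> x2 every {doubling_time(0.12):.1f} years")   # ~6.1 years
print(f"+20%/y -> x2 every {doubling_time(0.20):.1f} years")   # ~3.8 years
print(f"Moore's law (x2 every 2 y) -> +{2**0.5 - 1:.1%}/y")    # ~+41.4%/y
```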
2) Effect of increased core count
There is a clear effect of diminishing returns within the general trend of adding more CPU cores:
From about 2010 until 2017 (ending with the Intel 7th gen Core i5-7*00U), notebook CPUs shipped with 2 cores, with a Multi-Core score close to twice the Single-Core one (2000 vs. 1000 for the Intel Core i5-7300U)
From 2018 to 2021, notebook CPUs had 4 cores, but with a Multi-Core to Single-Core score ratio between 2.5 and 3.0 (at least for median scores)
Fast-forward to 2024: the most recent CPUs come with 6 (AMD) or even 10–12 cores (Intel, albeit with only 2 P[erformance] cores), and the Multi-Core score finally gets close to 4 times the Single-Core one (8000 vs. 2000 for the Core Ultra 125U).
3) Off the trend
The exponential fit gives a false sense of smooth progress in performance, while the true evolution is a bit more stepwise, with some processors “lagging behind” and others “ahead of their time”:
AMD CPUs before 2020 (Ryzen 7 PRO 2700U and Ryzen 5 PRO 3500U) lag behind their Intel counterparts, especially for Single-Core performance. Successors have closed the gap (notably with the Ryzen 5 PRO 5650U).
From 2018 to 2020, Intel kept releasing “re-re-refreshed” versions of its 2017 Kaby Lake (7th gen) architecture, so that the difference between the Intel Core i5-8250U (mid 2017), i5-8265U (mid 2018) and i5-10310U (early 2020) is pretty thin.
The performance of Intel processors makes nice leaps with the 11th and 12th gens (Core i5 11*5G7 and 12*5U), but things get more mixed starting in 2023 with the new “Core Ultra” branding, which comes with more variants (V and U series in the 2nd gen). Time will tell how this evolves further.
4) About performance variability
Up to this point, I’ve only commented on the median score, while deferring the discussion of the performance variability shown on the first plot.
First things first: the performance variability of notebook CPUs is quite large, with almost a factor 2 between the top 10% and bottom 10% (the 90% and 10% quantiles).
The Single-Core score distribution is left-skewed: the median and top-10% scores are often close together, while most of the variability lies in the left tail of low scores. My interpretation is that the top score indicates the rated chip performance, and the median being quite close to it means that about half the notebooks ride close to the rated value, so it’s quite easy to experience this performance in practice. Still, the bottom half can get much slower, but perhaps these are cases where notebooks run in a “low power/max battery life” setting. If that’s the case, it is not bad news, because notebook processors are designed to dynamically adjust their frequency & voltage to save energy.
This is why I always advise my students facing an unresponsive notebook to plug their AC adapter, especially when I make them run some microgrid optimization code.
Still, my father experienced a disappointingly low score of 1200 with his i5-1245U-powered laptop (the median score is 1950), even though we took care to plug in the AC adapter and set the power strategy to max performance.
The Multi-Core score distribution is, on the other hand, much more symmetric. I interpret this as meaning that it may be more difficult to reach the rated Multi-Core performance, at least with notebook CPUs. I guess that when all cores work simultaneously, the processor easily hits its maximum power or temperature limits, while one core working alone has much more room to “express its talent”.
What is left to explore is the effect of the notebook’s chassis. Seeing, for example, the pretty large variability of the Intel Core i5-10210U, perhaps there exist clusters showing that the bottom performance comes from specific models. Indeed, part of the power management of notebooks is under the control of the manufacturer. Example question: out of the Dell Latitude 5410 and the HP ProBook 440 G7, which both feature this CPU, is one better than the other?
Conclusion
Et voilà, I guess that’s enough for one post. Of course, I only covered computing metrics. For example, when I said in the intro that the Intel Core i5-10210U and i5-8250U are almost the same, these processors are still 2 years apart, and the corresponding notebooks may come with different Wi-Fi and Bluetooth standards, so that the more recent 10210U may still be more interesting.
In a following post, I will write about the second part of this dataset: the notebook offers table, which links the system price to its CPU performance, to get a Pareto-style trade-off chart of price vs. performance. Notice that this dataset and the draft plotting code are already available in the repository (Offers_plot.ipynb notebook and the Baserow Notebook offers table).
This is a test post to see how easy it is to embed Vega-Lite data visualizations in a WordPress page. Oddly enough, I didn't find an all-in-one tutorial for this. If you see an interactive bar chart below (interactive means: the bar color reacts to mouse hover), it means this method works!
vega viz will go here
Integration approach: I'm using the Vega-Embed helper library, since it is described as the “easiest way to use Vega-Lite on your own web page” in the official Vega-Lite documentation. In this post, I'm essentially adapting Vega-Embed's README for WordPress. Two steps are necessary:
Import the (three) necessary Javascript libraries in the site header
Paste the magic code snippet for each graphic to be embedded
Importing Vega javascript libraries
Adding custom JavaScript to the <head> section of a WordPress site is extensively covered online.
I chose the extension route, with WPCode, since it seems the most popular option. In the end, I find it does the job, albeit with perhaps too many features compared to what is needed for this task.
The objective is to inject the following HTML code into the site header, which will load Vega, Vega-Lite and Vega-Embed:
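For reference, the import snippet from the Vega-Embed README currently looks like this (version numbers evolve, and the exact lines used in this post may differ slightly):

```html
<script src="https://cdn.jsdelivr.net/npm/vega@5"></script>
<script src="https://cdn.jsdelivr.net/npm/vega-lite@5"></script>
<script src="https://cdn.jsdelivr.net/npm/vega-embed@6"></script>
```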
With WPCode installed, this requires clicking in the dashboard:
Code Snippets / + Add Snippet
In the Add Snippet page, select the first “Add Your Custom Code (New Snippet)”/“+Add Custom Snippet” button (don't get overwhelmed by the pile of other tiles... this is where I claim that this extension is a bit too much for the task)
In the Create Custom Snippet page, you're again greeted with another pile of tiles to select the code type. Again, the first choice, “HTML Snippet”, is the good one
Paste the four lines above in the Code Preview text editor
Give it a title (like Vega import), Save and Activate the Snippet
Other than that, all the options below the editor are fine, in particular the Snippet insertion location, which defaults to “Site Wide Header”. If you open the source code of this page (Ctrl+U) and see the lines with <script src="https://cdn.jsdelivr.net/npm/vega... (scrolling down to about line 206...), this means the method worked!
Restricting Vega import to specific pages (optional)
To avoid importing Vega on every page, I've used the optional “Smart Conditional Logic” of WPCode Snippet editor. There it can be specified that the Snippet should show only on specific pages like this one (in the “Where (page)” panel). In truth, the “Page/Post” condition is restricted to the PRO version, but “Page URL” is available.
Several pages can be specified by “+Add new group”, since groups of rules combine with boolean OR (whereas rules within a group combine with AND).
With this display logic, the Vega import line shouldn't be visible on most pages of this site like Home.
Adding the visualization code
Once the Vega libraries are imported, it remains to add, in the body of the page, the two bits of HTML code which will load the specific visualization we wish to display:
a block tag, typically an empty <div>, with a uniquely chosen id, where the visualization will appear
a <script> tag which will read Vega's JSON visualization description (the “spec” in Vega parlance) and load it into the target tag, with a call like vegaEmbed('#vis', spec)
Here I'll use again the example of Vega-Embed’s README:
<div id="vis">vega viz will go here</div>
<script type="text/javascript">
  var spec = 'https://raw.githubusercontent.com/vega/vega/master/docs/examples/bar-chart.vg.json';
  vegaEmbed('#vis', spec)
    .then(function (result) {
      // Access the Vega view instance (https://vega.github.io/vega/docs/api/view/) as result.view
    })
    .catch(console.error);
</script>
Using the WordPress block editor (Gutenberg), I'm adding a “Custom HTML” block with that content. In fact, it's possible to split the target tag and the script into two blocks (again, if you look at the source code with Ctrl+U, you should see the script just below, while the block tag with id="vis" is above).
At the VéloMix hackathon at IMT Atlantique in Rennes, I met with Guillaume Le Gall (ESIR, Univ Rennes) and Matthieu Silard (IMT Atlantique), who were working on battery charging monitoring. We ended up working on the problem of State of Charge (SoC) estimation, that is, evaluating the charge level of a battery by monitoring its voltage and current over time (and notably not its open-circuit voltage).
I had heard SoC estimation was often performed by the Kalman filter (and in particular with EKF, its extended version), but I had never had the occasion to implement it. Time was too short to get it working on the day of the hackathon, but now I have drafted a Python implementation, or more precisely three implementations:
Step-by-step literate-programming version of the filter, using a sequence of notebook cells to implement one step of the filter → nice to see the algorithm crunching the data line by line
Generic Kalman filter implementation (all the above steps wrapped in a single function, should work with any state space model)
Compact implementation specialized for SoC estimation, with a baked-in battery model (this last one should be the most useful to the project, ready to convert to Arduino/C++ code)
All these are available in a single Jupyter notebook:
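To give the flavor of the compact version, here is a minimal scalar Kalman filter sketch for SoC estimation. The battery parameters and the linear OCV model are toy values chosen for illustration (not the actual notebook code); a real battery needs the extended version (EKF) because OCV curves are nonlinear:

```python
import numpy as np

# Toy battery model (hypothetical values, for illustration only)
Q = 3600.0    # capacity in Coulombs (= 1 Ah)
R = 0.05      # internal resistance (Ohm)
dt = 1.0      # sampling period (s)
H = 1.2       # slope of the toy linear OCV model below (V per unit SoC)

def ocv(soc):
    """Toy linear open-circuit voltage: 3.0 V empty, 4.2 V full."""
    return 3.0 + H * soc

def kf_step(soc, P, i, v_meas, q=1e-8, r=2.5e-5):
    """One predict/update step of a scalar Kalman filter on the SoC."""
    # Predict: Coulomb counting (discharge current i > 0 lowers the SoC)
    soc_pred = soc - i * dt / Q
    P_pred = P + q
    # Update: measurement model v = ocv(soc) - R*i (linear here, so no EKF needed)
    K = P_pred * H / (H * P_pred * H + r)              # Kalman gain
    soc_new = soc_pred + K * (v_meas - (ocv(soc_pred) - R * i))
    P_new = (1 - K * H) * P_pred
    return soc_new, P_new

# Simulate a 1 A discharge from 80% SoC, estimating from a wrong 50% initial guess
rng = np.random.default_rng(0)
true_soc, est, P = 0.8, 0.5, 0.1
for _ in range(600):
    i = 1.0
    true_soc -= i * dt / Q
    v_meas = ocv(true_soc) - R * i + rng.normal(0.0, 0.005)  # 5 mV sensor noise
    est, P = kf_step(est, P, i, v_meas)
print(f"true SoC: {true_soc:.3f}, estimated: {est:.3f}")
```

The estimate quickly recovers from the wrong initial guess thanks to the voltage measurement, which is the whole point of the filter compared to plain Coulomb counting.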
(Post in French, since it’s about a presentation in French...)
My presentation « Optimisation des microréseaux » (microgrid optimization), given at École normale supérieure de Rennes for the Rencontres Mécatroniques, is available online. Aimed at mechatronics students, it was meant to be fairly pedagogical, explaining the sizing and energy-management challenges of energy systems.
« Optimisation des microréseaux » presentation @ENS Rennes (27-minute talk + questions). It's not easy to listen to yourself again, with all your speech tics!
Many of these ideas were developed in the context of the PhD thesis of Elsy El Sayegh (defended in March 2024) with my colleague Nabil Sadou, and we keep exploring them with Jean Nikiema, a new PhD student in the AUT team!
I’ve put online (on HAL) the manuscript I submitted to PowerTech 2021 (⚠ not yet accepted [Edit: it was eventually accepted. The presentation video is available]). It discusses the modeling of energy storage losses. The question is how to model these losses using convex functions, so that the model can be embedded efficiently in optimal energy management problems.
Key idea
This is possible thanks to a constraint relaxation: losses are assumed to be greater than or equal to (≥) their expression, rather than equal (=). With a convex loss expression, the resulting constraint is convex. In applications where energy losses tend to be naturally minimized, this works great: the inequality is tight at the optimum, so losses are eventually equal to their expression. Notice that in applications where dissipating energy can be necessary at the optimum, this relaxation can fail: losses end up strictly greater than their expression, meaning that there are extraneous losses!
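A tiny numeric illustration of that tightness argument, on a one-period toy problem with made-up numbers (this is not the paper's model, and I use SciPy's generic SLSQP solver rather than a dedicated convex solver):

```python
from scipy.optimize import minimize

# A battery with E0 kWh in stock helps supply a load D (kWh over one hour);
# losses follow the relaxed convex constraint loss >= a * p_b**2.
# Minimizing the grid purchase should make the inequality tight.
a, E0, D = 0.05, 3.0, 4.5
res = minimize(
    fun=lambda x: D - x[0],                  # grid purchase = load - battery power
    x0=[0.0, 0.0],                           # x = [p_b, loss]
    method="SLSQP",
    bounds=[(0.0, None), (0.0, None)],
    constraints=[
        {"type": "ineq", "fun": lambda x: x[1] - a * x[0] ** 2},  # loss >= a*p_b^2
        {"type": "ineq", "fun": lambda x: E0 - x[0] - x[1]},      # limited stock
    ],
)
p_b, loss = res.x
print(f"p_b = {p_b:.3f}, loss = {loss:.4f}, a*p_b^2 = {a * p_b**2:.4f}")
```

At the optimum, both constraints are active and loss equals a·p_b² up to solver tolerance, as predicted. A problem that rewards dissipating energy would break this tightness.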
Germination
I’ve had this topic in mind for some years now, with early discussion at PowerTech 2015 with Olivier Megel (then at ETH Zurich). Fast-forwarding to spring 2020, I had a nice exchange with Jonathan Dumas and Bertrand Cornélusse (Univ. Liège) which provided me with related references in the field of power systems.
After some experiments at the end of the last academic year, I assembled these ideas during the autumn (conference-deadline effect), realizing that, as often, there was quite a large amount of literature on the topic: papers that I would have stayed unaware of, had I not written the introduction of this paper! Still, I found the subject was not exhausted (also a common pattern), so that I could position my ideas.
Contribution
I believe that the contribution is twofold:
Describe the relaxation of storage losses in a unified way which handles the various existing loss models
Provide a simple continuous family of nonlinear convex loss expressions which can depend on both storage power and energy level.
The typical effect covered by the loss model I propose is when power losses (proportional to the squared power) vary with the state of charge (typically at the very end of charge or discharge). In short, it unites and generalizes classical loss models, while preserving convexity.
I don’t think this will revolutionize the field, but hopefully it can help people in energy management using physically more realistic battery models, while still keeping the computational efficiency of convex optimization.
Classical loss models (above) are piecewise linear (e.g. constant efficiency) or quadratic in storage power (Pb). Few models also depend on the storage energy level (Eb). The proposed model unites and generalizes the classical loss models, while preserving convexity. See the manuscript for the meaning of parameters a and b.
More than a year after the start of its development, I’m now pushing the “final” version of my tiny web app “Electric vs Thermal vehicle calculator, with uncertainty”. It allows comparing the ecological merit of electric versus thermal vehicles (greenhouse gas emissions only) on a lifecycle basis (manufacturing and usage stages).
Compared to alternatives (featuring nicer designs), this app enables changing the value of all the inputs to watch their influence, hence the name “calculator”. Of course, a nice set of presets is provided. Also, it is perhaps the only comparator of this kind to handle the uncertainty of all these inputs: it uses input ranges (lower and upper bounds) instead of just nominal values. Therefore, one can check how robust the conclusion that electric vehicles are better than their thermal counterparts is (spoiler: it is robust, unless using highly pessimistic, if not biased, inputs).
History of this app
I started this app in spring 2019, after I came across the blog post “Electric car: 697,612 km to become green! True or false?” by Damien Ernst (Université de Liège). At that time, I was not familiar with the lifecycle analysis of electric vehicles (EVs), and the article seemed at first pretty convincing. I was surprised to discover the sometimes violent feedback Pr. Ernst received. Fortunately, I also found more constructive responses, e.g. from Auke Hoekstra (Zenmo, TU Eindhoven), whom I soon realized to be a serial myth-buster on the question of EVs vs. thermal vehicles (*).
I quickly realized that the key question for making this comparison is the choice of input data (greenhouse gas emissions for battery manufacturing, energy consumption of electric and thermal vehicles …). Indeed, with a given set of inputs, the computation of the “distance to CO₂ parity” is simple arithmetic. I created a calculator app which implements this computation. This enables playing with the value of each input to see its effect on the final question: “Are EVs better than thermal vehicles?”.
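That arithmetic boils down to a couple of lines. The numbers below are made-up placeholders, not the app's presets (in the app, every one of them is editable):

```python
# Made-up illustrative inputs
extra_manufacturing = 6000.0  # extra kg CO2e to build the EV (mostly the battery)
thermal_per_km = 0.220        # kg CO2e/km of the thermal car (fuel + upstream)
ev_per_km = 0.060             # kg CO2e/km of the EV (depends on the charging mix)

parity_km = extra_manufacturing / (thermal_per_km - ev_per_km)
print(f"distance to CO2 parity: {parity_km:.0f} km")  # → 37500 km

# With uncertain inputs expressed as ranges, interval arithmetic gives a range:
manu_lo, manu_hi = 4000.0, 9000.0     # kg CO2e
saving_lo, saving_hi = 0.10, 0.20     # kg CO2e saved per km driven
print(f"parity between {manu_lo / saving_hi:.0f} and {manu_hi / saving_lo:.0f} km")
```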
Handling uncertainty
While discussing Ernst’s post with colleagues, I got feedback that perhaps the first big issue with his claim “697,612 km to become green” is that it conveys a false sense of extreme precision. Indeed, the high uncertainty around some key inputs prevents giving so many significant digits. And indeed, it is because several, if not all, inputs are open to discussion (fuel consumption: real drive or standard tests? electricity mix for EV charging: 100% coal or French nuclear power?) that there are all these clashing newspaper articles and Twitter discussions piling up!
Searching for good quality inputs
When I started the app, I thought I would save the time of researching good sets of inputs, since it is a calculator where the user can change all the inputs as they wish. This was a short-lived illusion, because I soon realized that the app would be much better with a set of reasonable default choices (presets), so I needed to do the job anyway. This is in fact pretty time-consuming, since almost every input calls for serious research. I don't consider this work finished (it cannot be!), but I think I've paid fair attention and found reliable sources (e.g. meta-analyses) for each input. My choices are documented at the end of the presentation page.
In this process, I've learned a lot, and the main meta-knowledge I take away is that inputs need to be fresh. Indeed, as in the field of renewables, the main parameters of EV batteries are changing fast (increasing size, decreasing per-kWh manufacturing emissions…) and so is the European power sector (e.g. the emissions per kWh for EV charging).
(*) Auke Hoekstra was perhaps tired of endlessly repeating his arguments everywhere, so he collected his key methodological assumptions in a journal paper, “The Underestimated Potential of Battery Electric Vehicles to Reduce Emissions”, Joule, 2019, DOI: 10.1016/j.joule.2019.06.002 (with an August 2020 update written for the Green party Bundestag representatives).
Because of (or thanks to) the lockdown, I taught my electrical engineering course « Énergie Électrique » remotely in May–June 2020.
To try to keep the students “in the rhythm” over the 5 weeks of the course, I created a series of practice questions (mostly multiple-choice) for the Moodle platform. Here they are as a downloadable archive (text + a file directly importable into Moodle), for any colleagues who might be interested.
This is just a short note that may be useful to other people with a similar setup. I just installed a fresh Windows 10 virtual machine (VM) on my Linux system (Debian 10 “buster”, with VirtualBox 6.1.4 coming from Lucas Nussbaum's unofficial VirtualBox repository).
One of the main objectives of this VM is to have Microsoft Office 365 for my work. Unfortunately, I faced a “blank window” issue which made it totally unusable! The first symptoms appeared around the end of the installation setup, but I didn't pay much attention since it still completed fine. However, at the first launch of Word (or PowerPoint or Excel), I got two blank windows: in the background, a big white window which was probably Word, and in the foreground, a smaller blank window, perhaps the license acceptance dialog, where I was probably supposed to press Yes, if only I could see the button...
I made some searches and did find reports of similar symptoms, but no adequate solutions. Still, I learned that the problem may come from Office's use of hardware acceleration for display. Since I had previously activated 3D hardware acceleration in the VM settings (it sounded like a good thing to activate), I tried unchecking it, with no effect.
It is also possible to disable Office's own hardware acceleration, and this worked for me. However, among the three ways to do it that I found, only the most complicated one worked (regedit). Here are the three options:
Opening “Word Options/Advanced tab” to check the box “disable hardware graphics acceleration”: obviously impossible when Word starts with a blank screen...
Starting Office in safe mode: I had read this would temporarily disable hardware acceleration so that I could use option 1
Disabling hardware acceleration by manually editing the registry (source: getalltech.com)
In the end, only the arcane option 3 worked. As mentioned in the source, on my fresh Windows 10 setup, the subkey Graphics needs to be created inside HKEY_CURRENT_USER\Software\Microsoft\Office\16.0\Common (the source says that version “16.0” is for Office 2016, but it seems Office 365 in 2020 is also version 16). Then, create the DWORD value DisableHardwareAcceleration and set it to 1.
Disabling Office 365 hardware graphics acceleration using Windows Registry editor (regedit)
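For reference, the same registry edit can be packaged as a small .reg file to import (double-click). This is a sketch assuming the 16.0 version key mentioned above; double-check the path on your own system:

```
Windows Registry Editor Version 5.00

[HKEY_CURRENT_USER\Software\Microsoft\Office\16.0\Common\Graphics]
"DisableHardwareAcceleration"=dword:00000001
```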
After a reboot (was it necessary?), I could indeed run Word 365 properly and accept all the first-run dialogs. It seems to run fine now. Phew!
Office Word 365, on Windows 10, inside VirtualBox, under Debian... quite a few spatial prepositions!