Report by Jack Cole — Apparent Excess Heat Produced in New Experiment

Thanks to Josh G for sharing this.

A new post on Jack Cole’s LENR-Coldfusion website reports on a new experiment he has completed in an attempt to replicate Alexander Parkhomov’s work, which shows an apparent energy gain in the form of excess heat.

Jack used a nickel powder and lithium aluminum hydride mix inside a 9″ long alumina tube, 1/4″ ID (inner diameter) and 3/8″ OD (outer diameter). Empty space within the tube was taken up with an alumina rod and a ceramic oxide/alumina mix.

Jack used a PID controller (which employs a feedback loop to maintain a steady temperature by varying the input power) and tested his system at various temperatures; the chart below shows the different temperature levels tested, and calculated COPs based on three different calculation methods. (See the article for more details about the different calculations.)
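For readers unfamiliar with PID control, the feedback idea described above can be sketched in a few lines of Python. This is an illustrative toy, not Jack's controller; the gains, setpoint handling, and the lumped thermal "plant" model in the usage loop are all invented for demonstration.

```python
def make_pid(kp, ki, kd, setpoint):
    """Return a PID step function mapping (measured_temp, dt) to output power.

    kp, ki, kd are the proportional, integral, and derivative gains;
    setpoint is the target temperature the loop tries to hold.
    """
    state = {"integral": 0.0, "prev_error": None}

    def step(measured, dt):
        error = setpoint - measured
        state["integral"] += error * dt
        derivative = 0.0
        if state["prev_error"] is not None:
            derivative = (error - state["prev_error"]) / dt
        state["prev_error"] = error
        return kp * error + ki * state["integral"] + kd * derivative

    return step


# Toy closed loop: a lumped thermal model in which temperature rises with
# input power and loses heat to ambient. All constants are invented.
pid = make_pid(kp=2.0, ki=0.1, kd=5.0, setpoint=1100.0)
temp, dt = 25.0, 0.1
for _ in range(20000):
    power = max(0.0, pid(temp, dt))  # a resistive heater cannot cool
    temp += (0.05 * power - 0.01 * (temp - 25.0)) * dt
# temp should now sit close to the 1100 C setpoint
```

The integral term is what lets the controller hold the setpoint exactly despite steady heat loss; the derivative term damps overshoot on the way up.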


He concludes his report:

Consistent with the findings of Alexander Parkhomov, this experiment demonstrated apparent excess heating in temperature regions above 1100C utilizing two separate sets of measurements (heat flux and calibration curve for the tube temperature below 1100C). The COP values ranged from 1.3 to 1.8 based on the reaction tube surface temperature and 1.05 to 1.5 based on the re-calibration of heat flux. I invite the reader to criticize this work so that we may determine if these results are related to LENR or related to some unrecognized error.
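The "calibration curve" approach mentioned in the conclusion can be illustrated with a short sketch. This is not Jack's actual analysis, and the dummy-run data points below are invented; the idea is simply to record input power versus surface temperature for the unfueled tube, then compare the power the calibration says is needed to reach the fueled run's temperature against the power actually supplied.

```python
# Hypothetical dummy-run (no fuel) calibration points: (surface temp C, input W).
CALIBRATION = [(600.0, 200.0), (780.0, 300.0), (920.0, 400.0),
               (1030.0, 500.0), (1120.0, 600.0)]

def power_needed(temp_c):
    """Linearly interpolate the calibration curve: watts needed to hold temp_c."""
    pts = CALIBRATION
    if temp_c <= pts[0][0]:
        return pts[0][1]
    for (t0, p0), (t1, p1) in zip(pts, pts[1:]):
        if temp_c <= t1:
            return p0 + (p1 - p0) * (temp_c - t0) / (t1 - t0)
    return pts[-1][1]  # clamp at the top of the calibrated range

def cop_estimate(fueled_temp_c, fueled_input_w):
    """COP = power the calibration says the temperature requires,
    divided by the power actually supplied in the fueled run."""
    return power_needed(fueled_temp_c) / fueled_input_w
```

With these invented numbers, a fueled tube holding 1120 C on only 400 W of input would score `cop_estimate(1120.0, 400.0)`, i.e. 600/400 = 1.5. Note the limitation the commenters discuss below: the estimate is only as trustworthy as the calibration itself.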

  • Ged

    Well done. Good to see yet another success, particularly one so carefully done. Even in the most conservative case, a 50% increase is no small matter. Guess we need to find ways to improve, perhaps by testing new fuel mix ideas (impurities) with this working design.

  • SG

    Nice report–well done. It might be interesting in a next run to add a gigahertz signal to the mix to see what effect it might have.

  • pg

    is anyone presenting a working device at the conference?

    • There is a dogbone test in process.
      It was planned for last evening, but it is taking more time to do cleanly…
      calibration is in progress.

  • Gerard McEk

    Great work Jack! It is difficult to criticize your work; for that one needs more detail. However, is it possible to do a long duration test? Did you have any HAD experience, or did you try to measure the heat output after you switched off?

    • Bob Matulis

      Agreed more details needed. Jack, did you use the Parkhomov method of measuring energy generated? (boiling off water) Also, as Gerard suggested a long duration test would be beneficial.
      Long test plus reliable measurement will produce more defendable results. Also a long run could create measurable amounts of transmutations.

  • LuFong

    I would expect the COP to go up as the temperature rises–exponentially in fact. Are we seeing any of this? It doesn’t appear so. Maybe at 1200C+? Still it’s good to see progress being made. Thanks for sharing your work!

  • Daniel Maris

    Now this is much more interesting… 🙂

    I hope you have continued success Jack as you discover more about processes involved.

  • Obvious

    I am starting to get very mixed feelings about these reports.
    It is mixture of frustration and disappointment.
    Does no one know how to do a clean experiment anymore?
    One control is not enough control, any more than one null means that a condition is permanently falsified, or one positive proves an effect. Controls and calibrations that have mid-experiment changes are not adequate controls or calibrations. Control information gathered from other experiments that are not the same is at best anecdotal. Making graphs and statistics from hardly any data points is pointless. Drawing conclusions from scanty data is hardly better than speculating.
    Please read the following simple school experiment description, and consider the points made. Better still, build the simple device, and run the experiment.
    I know you guys are working hard with your own money, time, and limited resources. I appreciate the efforts. But not doing the basics of proper control and calibration is quite possibly wasting your time and resources if nothing can ultimately be proven or disproven beyond the range of experimental uncertainties. When the uncertainties are not quantified, the experiments can prove nothing. At best you may improve your skills with the apparatus, so the work is not a total loss.

    • curious

      Very good point. The number of “experimental results” is irrelevant without a credible experimental process.

      • Ged

        To be fair, Obvious is mistaken about a few things. Jack’s two separate calibrations were done across the full domain of input power he uses, so they are sufficient. For the same input power, he got much higher heat with fuel than with the surrogate aluminum rod used in calibration. The system may not be perfect, but it is certainly sufficient.

        • Obvious

          Two calibrations. Whoopee. If GM or Ford did that for MPG numbers they would have no idea when the vehicle actually saved fuel. Toyota or Honda would clean their clock in the MPG race…. Oh, my.

          • Ged

            Haha. Well, let’s not even get into the MPG world, even by example–plenty of intentional scamming going on by those parties.

            Who even knows what they actually do, but it is not accurate to the real world. I’ll take two calibrations over marketing ploys any day, and more calibrations over two any day as well.

            Seriously though, I agree with you that more is obviously better (more accurate, better variance constraint, and more signal to noise).

          • Obvious

            MPG isn’t as bad an example as it first seems, although I get your point. There is tremendous social and personal pressure to get a positive XH result. It is all too easy to let bias creep in, even subconsciously. Where are the negative temperature “COP” results from the many experimenters? Surely the “errors” cannot be mostly positive in real life. But if you look at most of the reported experiment results, positive heat errors/artifacts/uncertainties seem to swamp the number of slightly negative ones. Not reporting the negative experiments is part of the reason, but these results are at least as important as the positive ones. The lack of slightly negative results implies a far greater level of measurement precision than there probably actually is (and I in fact know there is, from ongoing testing and research into TC readings).

          • Omega Z

            I too would like to see the negative results. But sadly, we live in a world where 1 negative seems to trump 10 positives. I understand why the negatives are seldom admitted.

            Note the MS scientists who claim that if you can’t replicate it 100 times with 100% positive results, then it is not real. They should get real. You can’t guarantee that with any product available today. Manufacturers strive to keep fail rates down to 3%.

            And, IMO, negatives can be a learning tool as well as the positives. They tell their own story.

          • Obvious

            I don’t merely mean negative results, but actual negative COP results.

          • Omega Z

            Yes. Those too can teach.

        • Sanjeev

          Just a few suggestions on the matter of calibration, since it’s so important.

          1. Calibration should be done for the full range, or if possible beyond the range.
          2. Both step-up and step-down readings should be taken.
          3. Data for HAD must be taken for both control and active run.
          4. Another round of calibration must be done after the active run. The intense heat and high power changes everything physically, from sensors to instruments and materials. Sometimes the sensors change position or degrade.
          5. Nothing should be changed after the calibration.

          I have rarely seen all this being done; neither Lugano, nor MFMP, nor Parkhomov, nor anyone else follows all of the above. Result: we are left scratching our heads every time.

          • Ged

            Those are all good suggestions, but I don’t think some of the reasoning is that applicable or dramatic in most cases. Particularly point four, where degradation lowers the heat reported (calibration must always come first because of this), and thermocouples don’t move dramatically enough to affect readings, if at all (especially when cemented, as in this case).

            Point 3 only matters if you’re measuring HAD to begin with, though most traces should include it.

            Point 5 is impossible in the regard that adding fuel is the variable under change and investigation, but otherwise yes: changes should be as limited as possible, or otherwise noted and their contributions tested.

            Point 2 mostly matters only for kinetics, as we want stable over-time behavior for COP, not the rate of change, though a full time trace will include such. Step-downs are not that relevant other than when looking for HAD or kinetics.

            But point 1 I absolutely agree with. In this case, such was done, which is good.

          • Zack Iszard

            Nickel powder missing the LAH is a perfect sub for the active powder to make Point 5 doable. Since the active powder is mostly nickel, the thermal absorptive and emissive properties would be reasonably similar. Heck, add some aluminum powder to replace the missing LAH (it’s 71% of LAH by mass and would be thoroughly molten at target temps) to be extra sure.

            All 5 of these points, and possibly more, must be obeyed for results to be firm. At the end of the day, a good Parkhomov replication is (obviously) a well-calibrated, data-rich heater setup where the only change from control to experiment is the powder charge.

          • Ged

            “must be obeyed”? I completely disagree, though I see where you are coming from. All but point one have serious flaws as they are based on assumptions that do not hold in every case and thus distract from actually evaluating experimental design.

            For just two example instances: If you aren’t evaluating HAD specifically, calibrating for HAD is useless and distracting. Bookend calibrations could be useful in some cases, but only the first calibration directly before the run is valid for comparison, to avoid sensor breakdown errors (which could over-cool the second calibration), unless the design has potential physical changes that need re-evaluation. Doing 3 calibrations in a row across the full temp range would do the same. Therefore insisting on and relying on an after-calibration is dangerous in most cases due to potential data quality loss, unless you are trying to evaluate that quality loss (not applicable in most simple experiment cases). This is why experiments must be individually evaluated and designed, and not just covered by blanket statements.

            Nickel powder would make a good control to help eliminate another changing variable, except you would have to prove it has become fully inert, for nickel is an active ingredient and hydrogen is everywhere, such as stored in water vapor in air. If you can’t prove it, it weakens rather than helps the control. This has been discussed at great length in the past.

            Designing experiments in science requires only 3 things, and we must always design and evaluate based on these three things, not any arbitrary wishes: control variables, independent variable, and dependent variable.

          • Sanjeev

            Of course, one would like to add the fuel for the active run. That is not the point of #5.

            Let’s say the TC burns out at the end of the calibration and the experimenter simply replaces it with a new one and starts the active run. This will introduce a new variable and will make the active run useless, since the TC with which you calibrated is not the same as the new one; it’s not good to assume that they are the same, even if of the same brand. Probably the position and contact are also changed now. Ideally, if the TC burns out, the calibration must be repeated with the new TC. This is what I mean by nothing must be changed after calibration.

            The reply of Zack below is also a very good tip.

          • Obvious

            If several different thermocouples were previously systematically tested in control characterization runs, then the possible range of error associated with the thermocouple replacement could be estimated. Otherwise all you have is relative temperature data based on results from one TC, which could itself be anomalous in its behaviour compared to “most” TCs. Previously tested and individually identified thermocouples would further increase the confidence when changing TCs, should this be necessary.

          • Ged

            Errr, but you are then making the case that the TC is always replaced between the first calibration and the active run; otherwise you don’t have additional support for point 5. If the TC remains the same, never touched, and cemented and thus immovably in place, these point 5 assumptions fail.

            A better version of point 5 is just to say all equipment must be calibrated before a run and not changed; otherwise, you must calibrate again. Not saying one shouldn’t do a bookend calibration, as it could be useful, particularly to evaluate long-term equipment performance, just that it adds little value compared to calibrating first (which must be done), and is not necessary for analyzing results at all.

    • Ged

      Perhaps we need to get you in command of running an experiment ;). Have you paired with an experimentalist, so that, given their materials, you design the protocol to follow? Or could you design several protocols to cover the common setups?

      • Obvious

        I am running my own. Repeating a process enough times, and making small adjustments, one at a time, in order to design a reproducible control and verify the calibrations is easy. Everyone wants to jump to the finish line, without preparing for or sometimes even running the race. I can see the allure of getting the prize of XH. The draw to it is strong. But to win it properly, the boring training part must be done.

        • Ged

          Make sure to let us know how it goes then!

        • Omega Z

          I don’t see Jack making any bold claims. I see someone asking for some constructive criticisms through crowd sourcing. Maybe the title could have been worded better.

          Jack is looking for opinions and critique of what he’s doing in order to improve his experiment. I believe this can be done in a cordial manner. If you’re doing such work yourself, I think you would appreciate the same treatment. There aren’t any bad guys here, just people looking for answers and maybe a little help with a complex issue.

    • Sanjeev

      I agree with you in general. But about this specific experiment, Jack already said that he was not very sure it deserved publication, but it’s good that he published it anyway. The good thing is there is no claim of a definite excess (“apparent” is the key word), so it’s just data he is giving away; take it or leave it.
      With the correct fuel mixture and anomalous results this is the right time to start calorimetry, which as he says, is his next plan. So everything is going as it should, one step at a time.

      • Obvious

        Yes. I understand it is preliminary and rough. But the average folks ignore that and go bonkers over another success. I would say that it replicates Parkhomov quite well, because he also is a lazy control and calibration doer.
        There is no way I would go to calorimetry when the thermometry is so unconsolidated. But I am only making a point, not telling anyone what to do.

        • Sanjeev

          Well then, for the benefit of average folk, I recommend everyone add a disclaimer to their reports that the results can be either positive or negative ;-).

          • Andre Blum

            Alternatively, you can scale up and make a heat plant for industrial use, then show us your savings on the power bills.

        • ecatworld

          I’m glad that Parkhomov, Jack and others are sharing their results. They are instructive and encouraging, even though there may be imperfections or shortcomings in their protocols.

          Readers can learn from their contributions, help them by providing feedback, and hopefully progress can be made through communicating back and forth.

          • US_Citizen71

            Open science. Isn’t that what the internet was made for? 🙂

        • Daniel Maris

          I think you are confusing enthusiasm for the possibility of LENR with enthusiasm for these particular test results. I expect many people are like me: they get interested in positive results and want to hear more about them – but it doesn’t mean they suspend disbelief…they want to hear objections and see a positive process of improvement in experimentation take place.

      • Daniel Maris

        I agree. Jack is looking for ways to improve on the test runs. No one loses on that approach. Let’s see what happens when he takes on board suggested improvements.

    • LuFong

      Perhaps in an environment where failure is the most likely outcome, the best approach is to perform experiments until the test is shaken out. At that point rerun the experiment using proper controls and calibrations, possibly by others. This is how science operates anyway at the larger scale–one person’s experiment is never enough to validate an effect no matter how many controls and calibrations are used. Perhaps these reports can be considered “investigations”. Good points however.

    • Thomas Clarke

      Getting excess heat in a flaky experiment does not help anyone. You cannot know whether the results are experimental error or some interesting effect.

      The only helpful work is to do experiments that are well controlled and have tight and well understood errors.

      The sign that there is in fact no real effect would be if such good-quality, carefully controlled experiments with low errors show no excess beyond error. And vice versa.

      Given good experiments there is room for as much investigation as you want to see whether different conditions result in real excess.

      • Daniel Maris

        How about talking English?

    • Daniel Maris

      Think you’ve got to make a distinction in your criticisms between a “collective” like MFMP (who I think should do better) and a solo experimentalist. I like Jack’s approach. He seems v. open to suggestions for improvement. That’s exactly as it should be. That’s how NASA got to the Moon (plus the tax dollars as well).

    • US_Citizen71

      Very wise. You make several good points. Without sharing it would be hard for improvement, so I hope no one is discouraged from sharing. But, if they do they should be open to constructive criticism and want to improve their process.

      • Obvious

        I don’t want to discourage sharing info. I want to encourage good, actionable data.

  • Alan DeAngelis

    I know an isotopic analysis of the ash would be expensive for Jack (and most of us) to do, but that would be something that could give definitive proof of LENR.

    • Axil Axil

      Did you not see this

      • Alan DeAngelis

        I saw this. “Yes, Klee Irwin is going to have some kind of analysis performed in terms of structure I believe (but not isotopic analysis).”

  • Obvious

    If tens or hundreds of people do a sloppy job, then all that effort still proves nothing. If two people do a proper calibration, and clean up uncertainties, and can say that they have definitely seen something that cannot be explained by some experimental uncertainties, then we have something.
    Your house has probably been heated hundreds of times the usual way, and you have an electric bill, gas bill, or wood pile to compare performance to. So you will know if it works. If you heated the house twice (ever) with wood, then tried your reactor once, would you be sure that there was an improvement? Maybe a window was open one of the times, but you aren’t certain which time; then what do you know? Maybe the house stores some heat, so for each test, it took a little less heating? Maybe your thermostat sticks at 72 degrees F until the temperature goes 10 degrees higher or lower, but nobody knows until it is tested. Maybe you wear a sweater for one test, and not note it? Maybe some of the wood is hardwood, and some is softwood. Maybe the sun heats the house randomly by variously open or closed curtains. Or the heat goes out the windows, due to open curtains when the sun isn’t shining. Etc., etc…

    • Skip

      Thanx for your reply O. We are in agreement, I’m sure.

      I said “proper Parkhomov replications” which, although redundant, means without scientific doubt and repeatable by following the specifics of the original experiment.
      I consider the term “replication” to be exact. Materials and methods. Otherwise it isn’t a replication.
      What my allegorical heater description intended was to address that there will be more than one way to take our common interest and knowledge and produce useful energy. A replication will provide proof (positive or negative, lol) of one of those ways.
      I will look for others.

      Oh, and although I don’t have a house (or anything close to it) I will do all the due diligence I am capable of before offering a clearly (to me) OU producer, to those I trust to examine; openly…

      • Obvious

        By replication, I generally mean conceptual replication. There really isn’t any one clear example of something that needs to be exactly replicated yet, that we have enough information for.

        There are lots of ways to tackle this problem. Many can be done on the cheap. But to be effective, they should be done well. As well as feasible, anyways. So folks that don’t have a lot of money, time, etc., actually get good information, so they stand a good chance at success without lots of dead ends, loose ends, and overall uncertainty.

        What my version of your allegory was meant to demonstrate is this: suppose one knew for certain that the windows were closed, the curtains closed, the wood was all the same kind of hardwood of the same dryness and weighed into batches; the chimney draft flow accounted for, the thermostat operation verified, the same sweater worn each time, the outside temperature the same, the inside temperature the same each time at the start, and the fire or reactor on for the same amount of time. And the reactor worked 150% better, plus or minus 2° because that is the accuracy of the thermometer, plus or minus 10 oz of wood because that is the cumulative inaccuracy in weighing the wood, plus or minus 1.3 kWh of electricity because reading the meter was tricky. And these uncertainties affect the results by +5% to -8%, so the excess heat is at least 132% and possibly as good as 155%, so still a very strong result. And opening a window gives a -11% reduction in heat, so even with an open window the result is good. Putting a sweater on the thermostat keeps it from sticking, but it reads low by 3°, potentially underestimating heat by X kWh… etc.

        And doing it 10 times each showed an overall variation of 3% in wood-burning measurements, and 4% variation in reactor-running measurements. So when the window is left open once, you will know what an open window reads like: a constant 11% decrease. Although it could also be explained by low wood-burning heat measurements combined with low wood fuel measurements, those should be commensurate if true; if not, and these values are in the middle to higher range, then the open window can be determined to be the cause. That is the power of strong data.
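The arithmetic behind this kind of bookkeeping can be sketched briefly. Independent fractional uncertainties are conventionally combined in quadrature (root-sum-of-squares). The instrument figures below echo the allegory (2° thermometer, 10 oz of wood, 1.3 kWh of electricity), but the denominators are invented purely for illustration:

```python
import math

def combined_fractional_uncertainty(fractions):
    """Combine independent fractional uncertainties in quadrature
    (root-sum-of-squares), the standard first-order propagation rule."""
    return math.sqrt(sum(f * f for f in fractions))

# Allegory-flavored example: 2 deg on an assumed ~70 deg temperature rise,
# 10 oz on an assumed ~320 oz of wood, 1.3 kWh on an assumed ~20 kWh reading.
band = combined_fractional_uncertainty([2 / 70, 10 / 320, 1.3 / 20])
# band comes out around 0.08, i.e. roughly an 8% uncertainty: a claimed
# excess smaller than that band is indistinguishable from measurement error.
```

This is exactly the point about quantifying uncertainties: until the band is computed, "150% better" is not interpretable.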

        • Mats002

          May I suggest you add this text to the replication instructions? It seems most people, including professional scientists, are not aware of the importance of producing strong data.

          • Obvious

            Here’s a fun read.


            ….”While Karl Popper imagined falsification occurring with a single, definitive experiment—Galileo refuted Aristotelian mechanics in an afternoon—the process turns out to be much messier than that. Many scientific theories continue to be considered true even after failing numerous experimental tests. Verbal overshadowing might exhibit the decline effect, but it remains extensively relied upon within the field. The same holds for any number of phenomena, from the disappearing benefits of second-generation antipsychotics to the weak coupling ratio exhibited by decaying neutrons, which appears to have fallen by more than ten standard deviations between 1969 and 2001. Even the law of gravity hasn’t always been perfect at predicting real-world phenomena. (In one test, physicists measuring gravity by means of deep boreholes in the Nevada desert found a two-and-a-half-per-cent discrepancy between the theoretical predictions and the actual data.) Despite these findings, second-generation antipsychotics are still widely prescribed, and our model of the neutron hasn’t changed. The law of gravity remains the same.”….

          • Mats002

            Fortunately, the E-Cat does not have to be interviewed about its expected excess heat. If that were the case, I could see a possible decline effect 🙂

  • Jack Cole

    I think the best I can say is that it supports the expected pattern from Rossi/Parkhomov. It is enough of a Yes for me to continue spending money on equipment and experiments. We’ll see what happens when I collect the heat with water.

    • bachcole

      Jack, let’s be honest. If you become one of the early replicators (I don’t mean in the “Star Trek” sense. That would be bad.), you will have very lucrative and fun employment for the rest of your life. (:->) Of course, if your number one motivation is curiosity, then that would be better for all concerned.

  • Alan DeAngelis

    Yeah Thomas, I agree. It does take a lot of resources to do a proper analysis.

  • Obvious

    The minimum ideal is this:
    I suggest that no fewer than 5 complete control reactors be built and tested, as similar as possible to each other. Keep testing one design until it is solid, then build four more just like it. Then build 5 more reactors using the same design, loaded with fuel. Have someone outside the experiment, who knows which are loaded and which are not, number or label them somehow. This is kept secret until each reactor has been run at least 5 times (10 each would be even better, if they live that long). The reactors are to be run in random order, mixing the order each time. Take lots of measurements of the various parameters, and do it the same way every time. Compile all the info, then re-connect the labels to the loaded or unloaded state. Then do basic statistics and see what you have.
    This is expensive and time consuming, but it is the right way to do this. This is why a standard, robust design is needed. Not much point in doing any serious experiments until the dummy can last at least 5 runs.
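The blinded, randomized protocol described above is straightforward to encode. The following Python is a hypothetical illustration of the bookkeeping only (label assignment by an outside party, a freshly shuffled run order each round, and unblinding only at analysis time); the function names and the synthetic data in the usage example are invented:

```python
import random
import statistics

def blind_labels(n_control, n_fueled, seed=None):
    """Outside party: assign anonymous labels; the key stays secret."""
    rng = random.Random(seed)
    states = ["control"] * n_control + ["fueled"] * n_fueled
    rng.shuffle(states)
    key = {f"R{i + 1}": s for i, s in enumerate(states)}
    return list(key), key  # labels go to the experimenter, key to the outsider

def run_order(labels, n_rounds, seed=None):
    """Experimenter: run every reactor each round, in a fresh shuffled order."""
    rng = random.Random(seed)
    order = []
    for _ in range(n_rounds):
        batch = labels[:]
        rng.shuffle(batch)
        order.extend(batch)
    return order

def unblind(measurements, key):
    """After all runs: split measurements by true state and summarize."""
    groups = {"control": [], "fueled": []}
    for label, value in measurements:
        groups[key[label]].append(value)
    return {state: (statistics.mean(vals), statistics.stdev(vals))
            for state, vals in groups.items() if len(vals) > 1}

# Usage sketch: 5 control + 5 fueled reactors, 5 runs each, then unblind.
labels, key = blind_labels(5, 5, seed=1)
order = run_order(labels, n_rounds=5, seed=2)
# In reality each entry of `order` is a physical run; here we fake the data.
measurements = [(label, 1.5 if key[label] == "fueled" else 1.0)
                for label in order]
summary = unblind(measurements, key)
```

Only after `unblind` does anyone compare the fueled and control distributions, which is what keeps expectation bias out of the measurements themselves.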

    • US_Citizen71

      I’m not sure a robust reactor core is the biggest problem. One could simply fill one of these: or something like it with fuel and encase it in alumina cement for a robust core. The big problem is the heater. No large manufacturers of micro-kilns exist as far as I know, so the SiC elements that MFMP are looking into probably are the way to go. Short of SiC, heavy-gauge Kanthal is the next best heater I’ve seen. Solving the heater issue is going to be paramount to getting good data.

      • Obvious

        I meant the reactor design generally. I’m still waiting for my quote on the Superthal minis. It might have MoSi2 coils, so it could be a pain to start up. Most of these light up like a dead short, then gain resistance in a hurry as they heat up, so they need a gently ramped start-up current, then PWM.

        • Obvious

          Yep, them mini Superthals are big bucks. Roughly $2000.00