New MFMP Glowstick Test Underway (Update: Fueled Test Started on Jan 30th)

Alan Goldwater and Mark Jurich of the Martin Fleischmann Memorial Project have started a new Glowstick 5 test, which is running live from Santa Cruz, California.

UPDATE (Jan 30, 2016): Alan Goldwater has informed me that a live fueled test has just begun, and can be followed at the links below.

The main difference between this test and the previous Glowstick 5 test is in the pre-treatment of the nickel. According to this document describing the experiment, here is how the nickel will be treated:

Pre-baking the Ni powder at 200ºC for 1 hour, cooling, then baking for another hour.

Heating to 115ºC under vacuum for several hours, to de-gas the contents.

Heating with H2 to reduce oxides and potentially ‘load’ some H2 into the cracks and crannies created by the pre-baking.

The experiment can be followed live at this link (including chat with the experimenters): http://magicsound.us/MFMP/video/

  • The problem is the missing stimulation.

    This test will also fail with high probability unless, by accident, a specific heater element frequency is used that unwittingly stimulates the fuel.

    Maybe this is why some replications seem to work and then fail the next time with the same setup.

    The problem was also mentioned by me356 here:
    Glowstick 5.2 Test series

    • Andreas Moraitis

      I think it is better to avoid implementing too many changes at once.

  • Ged

    Well guys, let’s rock this science! Glad we are testing how this particular variable affects (or doesn’t) the results.

    Edit: Just as a quick link reminder for everyone, here is the live data http://data.hugnetlab.com/ , under GS5 at the bottom.

    Should be easy to evaluate this test, as we have so much prior data for GS5. That’s the beauty of multiple N.

    Edit2: Also, the live doc, for those who didn’t see it on the quantumheat website or facebook page: https://docs.google.com/document/d/1SZT__Sb9hQSKdycoXyKramuZCAJz6G9UlS_0kB0dJo4/edit?pref=2&pli=1

  • Oystein Lande

    As Barty says below, triggering and stimulus are vital ingredients. I think that is where many replication attempts fail.

    To sum up what I’ve found and stated earlier on triggering:

    1. Brillouin is using electrical stimulation on their Ni-H reactor.

    McKubre stated on Brillouin:
    “The fact that the Q pulse input is capable of triggering the excess power on and off is also highly significant.”

    2. Swartz has discovered something interesting:
    “Astonishingly, it has now been discovered that high intensity, dynamic, repeatedly fractionated, magnetic fields have an incremental major, significant and unique, complex, metachronous amplification effect on the preloaded NANOR®-type LANR device”

    “H-field pulse sequence was delivered (dH/dt ∼1.5 T with 0.1 ms rise time × 1000–5000 pulses)”
    Ref. http://www.iscmns.org/CMNS/JCMNS-Vol15.pdf#page=73

    3. Rossi is using some kind of electromagnetic stimulation….

    I have tried to ask him several times about stimulation, but he always says “no comment” or “confidential”.

    A reason why Rossi will not comment on stimulation of the core may be that Piantelli has already patented such mechanisms, as I refer to below.

    So Rossi is using stimulation, but doesn’t want to talk about it, since he may “get Piantelli on his back”?

    The only thing I find in the Rossi patent claims is “reinvigorating” the reaction by “varying” a voltage source.

    That could mean varying an AC voltage at some high frequency (at what hertz?), thereby creating some extra stimulating magnetic fields for “reinvigorating” the core.

    But he cannot state it in his patent, since it is already protected by Piantelli.

    4. More on stimulation, this time from Piantelli patents:
    “……..impulsive trigger action consists of supplying an energy pulse”

    “…..trigger means (61 ,62,67) for creating an impulsive action (140) on said active core (18), said impulsively action (140) suitable for causing……”

    http://worldwide.espacenet.com/publicationDetails/claims?CC=EP&NR=2702593A1&KC=A1&FT=D&ND=&date=20140305&DB=EPODOC&locale=en_EP

    “………an impulsive application of a package of electromagnetic fields, in particular said fields selected from the group comprised of: a radiofrequency pulse whose frequency is larger than 1 kHz; X rays; v rays; an electrostriction impulse that is generated by an impulsive electric current that flows through an electrostrictive portion of said active core….”

    “- an electric voltage impulse that is applied between two points of a piezoelectric portion of said active core; an impulsive magnetostriction that is generated by a magnetic field pulse along said active core which has a magnetostrictive portion.”

    “Such impulsive triggering action generates lattice vibrations, i.e. phonons…”

    http://worldwide.espacenet.com/publicationDetails/description?CC=WO&NR=2010058288A1&KC=A1&FT=D&ND=3&date=20100527&DB=worldwide.espacenet.com&locale=en_EP

    5. Bockris: “It is interesting that the excess heat, caused by RF stimulation, reaches a maximum value and, after a certain time, falls to zero. A possible explanation is that the RF stimulates only the deuterium nucleus at the near surface of Pd. It is well known that, due to the ‘skin effect,’ high frequency alternating currents are felt only up to a certain depth (called ‘skin depth’) ”

    Ref. Bockris et al.:
    http://www.lenr-canr.org/acrobat/BockrisJtriggering.pdf

    6. McKubre et al. paper:
    http://lenr-canr.org/acrobat/McKubreMCHtheneedfor.pdf
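The “skin depth” mentioned in point 5 is straightforward to estimate from the classical formula. A minimal sketch – the palladium resistivity used here is an illustrative textbook value, not a figure from Bockris’ paper:

```python
import math

def skin_depth(freq_hz, resistivity_ohm_m, mu_r=1.0):
    """Classical skin depth: delta = sqrt(2*rho / (omega * mu))."""
    mu0 = 4e-7 * math.pi          # vacuum permeability, H/m
    omega = 2 * math.pi * freq_hz  # angular frequency, rad/s
    return math.sqrt(2 * resistivity_ohm_m / (omega * mu_r * mu0))

# Palladium: rho ~ 1.05e-7 ohm*m, roughly non-magnetic (mu_r ~ 1)
print(skin_depth(1e6, 1.05e-7))  # ~1.6e-4 m (0.16 mm) at 1 MHz
```

Consistent with Bockris’ point: the higher the frequency, the thinner the layer of material the RF actually reaches.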

    • Mats002

      Absolutely convincing arguments, totally agree Oystein.

      MFMP seems to do the replication step by step, understanding one parameter at a time. Sharp EM stimulation should be the last part of the parameter space to explore; they are just not there yet.

      Preparation of the fuel is the second-to-last parameter, which is what we see running now. Our friend Freethinker was the first to prepare fuel the way MFMP does now, and he got clear gamma signals, which should not be there if only chemical reactions occur in the core. At that point Freethinker disappeared to earn some money, which I hope was successful.

      • Albert D. Kallal

        No question the EM stimulation is a big deal. On the other hand, dropping a strong permanent magnet on top of the device might also do the trick. The magnet is NOT an EM stimulation, so I don’t want to stray off topic – but it is an “easy” thing to try!

        Has any of the replications seen excess heat WITHOUT some kind of EM stimulation? I think this is the “big” question.

        Regards,
        Albert D. Kallal
        Edmonton, Alberta Canada

        • This could work, but I think you have to move the magnetic field to have a continuous effect.

          • Albert D. Kallal

            Actually, I mentioned the permanent magnet because of that professor from MIT (Halgonson?) who was teaching the Cold Fusion 101 course. They found that attaching a permanent magnet to their LENR NANORs significantly increased output. So some evidence exists that a magnetic field can enhance the LENR effect – a different issue and story from the EM stimulation – but nevertheless a very interesting effect occurs on those LENR NANORs when a permanent magnet is used.

            Regards,
            Albert D. Kallal
            Edmonton, Alberta Canada

          • Okay, it’s worth a try. Would be very easy to do anyway 🙂

          • Warthog

            Hagelstein.

          • RLittle

            Thanks.

        • Mats002

          There have been some anomalies of excess heat at times in several of the live open (broadcast) experiments, but nothing in any way conclusive – not over 20% (COP > 1.2) that I can recall. Some of the experiments showed weak radiation, which cannot be of chemical origin, and I hope this run will at least show clear signals of radiation.

  • Ged

    Well, one interesting thing I’ve seen so far with this new pre-treated nickel is how far and fast the hydrogen pressure fell once the temperature ramped up to 100°C – 100 PSI sucked right up in just an hour and a half. Since the pressure is holding around 2 PSI, it doesn’t seem like it was a leak either. Such an early and vigorous point for absorption – I don’t know what it means, if anything. But based on the pre-bake hypothesis, I would predict that the baking removed trapped gasses and other films from the surface of the powder, allowing much more room and access for the hydrogen to the nickel.

    • Sanjeev

      I see from the data that at the beginning of the experiment the pressure fell to almost 0 psi an hour before the heating began. It did not change much after heating started. So I guess there was either a leak or something really strange happened.
      Edit: It was vacuumed before heating started, so that solves the question. The initial pressure was 14 psi.

      • Ged

        Oh whoops, I added a factor of 10 to the psi figure I was looking at by accident. So 12 PSI absorbed, it seems. Interesting! Who knows what could happen now with the pre-treatment scheme they devised.
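For readers following along, the hydrogen uptake implied by such a pressure drop can be roughly estimated from the ideal gas law. A sketch with hypothetical numbers – the cell’s free volume here is an assumption for illustration, not a figure from the experiment:

```python
R = 8.314          # gas constant, J/(mol*K)
PSI_TO_PA = 6894.76  # pascals per psi

def h2_moles_absorbed(dp_psi, volume_m3, temp_k):
    """Ideal-gas estimate: moles of H2 removed from the gas phase
    for a pressure drop dp_psi in a fixed free volume."""
    return dp_psi * PSI_TO_PA * volume_m3 / (R * temp_k)

# Hypothetical values: 12 psi drop, 20 mL free volume, ~373 K
print(h2_moles_absorbed(12, 20e-6, 373))  # ~5.3e-4 mol under these assumptions
```

With different assumed volumes the absolute number changes proportionally, but the method is the same.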

  • AdrianAshfield

    Some script has stopped working and the display is frozen, using Firefox.

    • Bob Greenyer

      Try again on a different browser. For me – it does not work at all on Chrome for Mac – but fine in Safari.

  • Dr. Mike

    A regimented pretreatment of the Ni should help in getting consistent results in LENR experiments, especially since it is most likely that reactions at the Ni surface are a critical factor in LENR. In fact, any experiment designed to evaluate the effect of the starting Ni powder, such as comparing nickels from the same vendor but having different particle sizes, should begin with identical pretreatments of the Ni powders. However, I have to agree with Barty’s and Oystein’s comments below, which suggest that a more critical factor in achieving consistent results in LENR experiments is to optimize the EMF pulses supplied to the reactor. In my post from May 7, 2015 I recommended building a reactor with a separate winding for optimizing the pulses supplied to the reactor (although pulses superimposed on DC for heating with a single winding would also work). I believe that the statement from the Introduction of the Lugano Report – “In addition, the resistor coils are fed with some specific electromagnetic pulses.” – should be heeded more carefully by LENR experimenters.

  • If they added carbon powder and iron dust to the fuel mixture, it might work.

    • Bob Greenyer

      We have considered and even proposed carbon, but there may be some issues (which may help!) – it may create nickel tetracarbonyl, which is very toxic.

      • Axil Axil

        If the powder included iron, or the reactor were made of iron, would the iron combine with the oxygen above 500°C and keep the oxygen out of the chemical reaction? Just wondering.

        In the Lugano test, there were substantial amounts of carbon in the fuel load. Why did that fuel not produce nickel tetracarbonyl, given that neither Rossi nor the testers were harmed by the fuel load?

        • Ted-X

          Moisture in the air decomposes nickel tetracarbonyl very fast. The quantity of this material could have been very small, and the material was probably lost very quickly on opening the reactor, due to air drafts or a fume hood. Otherwise it would kill the people opening the reactor.

      • Ecco

        Carefully exploiting Ni(CO)4 formation might be able to produce chemical vapor deposition of Ni inside the cell.

        • Bob Greenyer

          Yes, and this is why we considered it. It would need careful handling in takedowns.

        • Ted-X

          Temperature fluctuations could produce Ni-nanoparticles and Fe-nanoparticles in the reactor.
          ———————————————————
          Bob Greenyer, use a fumehood as carbonyls of Fe and Ni are extremely toxic (but they decompose in about one minute in the air, as they react with the water/moisture).
          ———————————————————–
          I also suggested previously cryogenic treatment of nickel (or cryogenic milling) as a pre-treatment of nickel.
          I believe that the liquid nitrogen treatment can make a difference.

  • Stephen

    I really like the streaming video – very clear and easy to follow. I especially like being able to see and follow the comments during the test run. Great.

  • US_Citizen71

    A bit off topic, but might make some interesting reading while waiting on the experiment.

    I did a Google search for “Proton–lithium-7 fusion” and one of the links led me to this patent: http://www.google.com/patents/US20090274256

    which led to this patent https://www.google.com/patents/WO2014189799A1

    and eventually a discussion on http://www.lenr-forum.com/forum/index.php/Thread/1634-Unifiedgravity-New-player/

    This topic may not be new to those who spend time on lenr-forum.com but it is to me. Their claims seem to match up with the E-Cat X.

  • Valeriy Tarasov

    Just an idea about a possible way to get voltage – direct electricity – from a LENR device, i.e. from E-Cat-like devices. If you have a magnetic field (from a permanent magnet) in the volume of the LENR fuel, and you have moving protons, alpha particles (after lithium fission) (positive charge) and electrons (negative charge) in this fuel (plasma), then you will have spatial separation of charges in the magnetic field – i.e. voltage, i.e. direct electricity.

    If the above idea is correct, and if Rossi had magnetic stimulation of the LENR, then voltage was a side effect of the E-Cat.
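Tarasov’s charge-separation idea rests on the Lorentz force, which pushes positive and negative charges moving through a magnetic field in opposite directions. A minimal sketch with illustrative values (not from any experiment):

```python
def lorentz_force(q, v, B):
    """F = q (v x B), with velocity v and field B as 3-vectors (SI units)."""
    vx, vy, vz = v
    bx, by, bz = B
    return (q * (vy * bz - vz * by),
            q * (vz * bx - vx * bz),
            q * (vx * by - vy * bx))

# A proton (q > 0) and an electron (q < 0) moving with the same velocity
# through the same field are deflected in opposite directions,
# which is the charge separation (voltage) mechanism described above.
e = 1.602e-19  # elementary charge, C
print(lorentz_force(+e, (1e5, 0, 0), (0, 0, 0.1)))  # deflected one way
print(lorentz_force(-e, (1e5, 0, 0), (0, 0, 0.1)))  # deflected the other way
```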

    • Bob Greenyer

      Thanks for your input.

    • Ophelia Rump

      The magnet will lose its magnetic properties as it heats; the poles will cease to line up.

      • Valeriy Tarasov

        The above is an idea in principle. The magnetic field can be made controllable. Technically, a solenoid carrying DC can provide not only heating for the fuel, but also the constant magnetic field necessary for LENR stimulation, at any time.

    • US_Citizen71

      I like this hypothesis for DC electrical production the best:

      “[0126]

      When a proton fuses with a lithium nucleus, the result is the temporary creation of a beryllium ion. The increased charge of the beryllium ion is sufficient to capture an additional electron present in the conduction band of the lithium target. The beryllium nucleus then splits into two energetic helium ions that travel in opposite directions and leaves a total of four free electrons with forward momentum as imparted by the former beryllium nucleus. The momentum imparted to the electrons enables the electrons to randomly walk through the lithium foil in the same way as the helium ions and be collected in the Faraday Cup.

      [0127]

      With half of the helium ions each having a double positive charge and four electrons with a quadruple negative charge collected in the Faraday Cup, a negative current double the proton beam current should be detected when 100% fusion efficiency is achieved. This was the case during test #8 in which a negative current close to double the proton current was measured but never exceeded.

      [0128]

      The Faraday Cup during test #8 detected a measurable DC current in the high −μA range throughout the entire 91 minute period of proton impingement with two negative current plateaus that were close to double the proton current.” – http://www.google.com/patents/US20090274256

  • bachcole

    I did my “Zzzzzzzzzzzz” insult with the latest Randall Mills doze-off. But I will not do it with MFMP, because they are much more polite and respectful, and I think that they are headed in the right direction and could get results any day now. But, please, wake me when they have real results. (:->)

    • Bob Greenyer

      Sure thing, what’s your phone number?

      • bachcole

        I’m glad that we are still friends. (:->) When, not if, you get results, I will be jumping for joy and telling everyone. You would be a most trusted and utterly reliable replicator.

  • Axil Axil

    In the Lugano report, there was at least one 100 micron nickel particle present in Rossi’s fuel. The Rossi patent also states that his pretreatment results in nickel particles that are between 1 and 100 microns in size. Is the pretreatment of the fuel in this experiment, with temperatures of no more than 200°C, expected to sinter the fuel to match the particle size profile that Rossi uses? Before this test, was the fuel analyzed to see if the pretreatment produced a particle size profile equal to that of the Rossi patent? If this check was not done, why was it not done?

    Since this test is a Rossi replication attempt, should it not produce a fuel load as close as possible to the Lugano test?

    • Ecco

      If the internal reactor temperature was > 1400°C, the fuel was in liquid form during operating conditions, which means that actual particle size shouldn’t matter in Lugano’s case. Besides, it was apparently found to be encrusted on the inner walls upon extraction, according to Cook’s paper posted on arXiv.

      • Axil Axil

        Your assumptions are reflected in this comment. But consider a collection of nickel particles trapped randomly on a foam structure that remains solid at elevated temperatures. This entrapment might keep the particles solid, or at least maintain particle sizes even when the nickel is melted and held in place on the foam.

        The same melting process could also be happening on the surface of the steel fuel holder in the wafer.

        I have made many mistakes when looking at the Lugano report with regard to the characterization of the fuel and ash. I have just found another one.

        The fuel must have melted, because particle 1 on page 45 is huge: about 600 microns wide and at least that long.

        The huge nickel ash particle appears in the micrograph to have lost the feathery nanometric lithium-coated surface morphology that appears on the 100 micron fuel particle.

        It is hard to believe that such a huge particle, at 13 milligrams, could be pure Ni62. There is little lithium on or inside that particle.

        When nickel melts, it seems important to keep it separated and distributed in space. It would be bad if the nickel formed a single pool of material at the bottom of the alumina tube.

      • Axil Axil

        The five factors that might contribute to the formation of hydrogen Rydberg matter (HRM) are as follows:

        1. Electropositive catalytic activity (i.e. lithium, potassium, calcium oxide, rare earth oxides). The low work function of this material seems to be important in HRM catalytic activity. This includes graphite (http://arxiv.org/pdf/1501.05056v1.pdf). In the Lugano report, there was a coating of rare earths on the nickel fuel particles; this might be related to reducing the work functions of the nickel particles as a result of rare earth oxides in the fuel.

        2. High pressure produced by flaws in the crystal structure of the metal (i.e. nickel).

        3. Electrostatic field amplification produced by elongated and sharp nanostructures.

        4. A hexagonal crystal structure that provides a quantum mechanical template for HRM formation.

        5. A long timeframe – this speaks to the fact that HRM formation is driven by probabilistic causation, similar to radioactive decay.

        Once HRM is formed, it remains active for a long time if it is kept inside the reactor core using containment produced by a magnetic material.

        • Obvious

          Please elaborate on the idea that there are REEs coating the nickel particles.
          How do you come to that conclusion?

          • Axil Axil
          • Bob Greenyer

            The *GlowStick* 5.2 has been temperature cycling whilst Alan sleeps – and it is producing lovely data. As the temperature is dropped, the pressure rises, and vice versa… quite counterintuitive… is a reversible reaction going on?

            The pressure trend is upwards, so it may bleed and continue like this for a while.

          • US_Citizen71

            I don’t know whether bleeding it is a good idea or a bad idea, but I just looked at the 12-hour view of the graph on hugnet, and it appears that as the pressure has risen, the delta between the active and null has narrowed.

          • Bob Greenyer

            It does look like that, yes.

          • Obvious

            Looks more like a mostly-lithium particle, but I see where your idea comes from. Some of these could be small molecules, for example mass 156; I’m not sure what that would be.
            However, a dash of Pr might make some sense, since it does have some properties that could be useful. At least it isn’t very expensive, anyway.

          • Axil Axil

            Isotopes

            156Gd: atomic mass 155.922118(4) u, natural abundance 20.47(9)%

            Natural gadolinium is a mixture of seven isotopes, but 17 isotopes of gadolinium are now recognized. Although two of these, 155Gd and 157Gd, have excellent capture characteristics, they are only present naturally in low concentrations. As a result, gadolinium has a very fast burnout rate and has limited use as a nuclear control rod material.

            Properties

            As with other related rare-earth metals, gadolinium is silvery white, has a metallic luster, and is malleable and ductile. At room temperature, gadolinium crystallizes in the hexagonal, close-packed alpha form. Upon heating to 1235°C, alpha gadolinium transforms into the beta form, which has a body-centered cubic structure.

            The metal is relatively stable in dry air, but tarnishes in moist air and forms a loosely adhering oxide film which falls off and exposes more surface to oxidation. The metal reacts slowly with water and is soluble in dilute acid.

            Gadolinium has the highest thermal neutron capture cross-section of any known element (49,000 barns).

            Uses

            Gadolinium yttrium garnets are used in microwave applications and gadolinium compounds are used as phosphors in color television sets.

            The metal has unusual superconductive properties. As little as 1 percent gadolinium improves the workability and resistance of iron, chromium, and related alloys to high temperatures and oxidation.

            Gadolinium ethyl sulfate has extremely low noise characteristics and may find use in duplicating the performance of amplifiers, such as the maser.

            The metal is ferromagnetic. Gadolinium is unique for its high magnetic moment and for its special Curie temperature (above which ferromagnetism vanishes) lying just at room temperature, meaning it could be used as a magnetic component that can sense hot and cold.

          • Obvious

            The 141 and 156 peaks could also be caused by dimethylnaphthalenes, which may be from the glue.

          • Axil Axil

            Before analysis, the particle was cleaned of contaminants using sputter cleaning for 180 seconds. The glue residue would have been cleaned off the particle. The heavy material that we are concerned about was welded into the nickel by the fuel preprocessing method. The particle may have been covered in a coating of heavy elements and then sintered by the application of an electric arc which welded the heavy element coating onto the surface of the nickel.

    • Bob Greenyer

      Bob Higgins expects there will be some sintering at this temperature – and perhaps this will result in a more varied size distribution. It is about temperature and time. Reducing the nickel may also increase its propensity to sinter at lower temperatures.

      • Axil Axil

        The melting of nickel points to the need to have a solid fibrous substrate that supports the nickel in spots throughout the volume of the reactor core. Think paint.

        In Rossi’s wafer, the solid substrate is steel. When nickel melts, nickel is painted onto the steel over its surface along with the other components of the fuel.

    • magicsnd1

      Axil, the Ni powder being used is Hunter AH50, which has been thoroughly analyzed in previous experiments. To examine it at each stage of treatment is beyond my available resources. But loan me your SEM (or donate funds to buy one) and you’ll get what you asked for.

      • Axil Axil

        All that is needed to check a good deal of the Lugano comparability is a relatively low-resolution microscope, to verify that the particles are sintered into a size profile of 1 to 100 micron particles. The Rossi patent says that the small particles are produced by exploding 5 micron particles. We can infer whether the sintering is consistent with what Rossi has done by looking at the results of the MFMP fuel sintering process. Transmutation products on the fuel particles are related to how the sintering is done. That is interesting to know, but you can only do so much with the equipment on hand.

        • magicsnd1

          I’ll have a look when I unload the Nickel (soon). My optical microscope can resolve to ~ 5 microns, and any sintering should be visible with that.

          • Axil Axil

            Please be patient with me.

            Is what you are going to look at fuel or ash? I am referring to fuel particle inspection after pretreatment and before use in the test run.

          • magicsnd1

            Your suggestion was a good one Axil. The image above is the Hunter AH50 Ni only, after baking and treatment in H2 at ~150 C.

          • Axil Axil

            It seems to me our goal is to replicate the look of the particles that we see in the Lugano report. The micrograph that you show here does not show the fine feathery surface that appears in the Lugano fuel analysis. Maybe sintering at a higher temperature is called for.

          • magicsnd1

            An optical microscope can’t resolve the fine surface details. That’s why I asked for loan of a desktop SEM.

            The features you refer to are typical of carbonyl nickel like the Hunter AH50 I’m using. Here’s an image of the powder (on the left), posted by Bob Higgins in May 2014.

        • magicsnd1

          Here’s an image of the Ni powder after the loading test. There may be a little evidence of sintering in the powder, though it’s not possible to distinguish sintering from the clumping typical of particles this size. The range of particle sizes, ~1–30 microns, is similar to the unprocessed Ni.

          The small needles visible are contamination from the tissue wrapping in the box of microscope slides. Next time I’ll use an air duster first….

  • Andreas Moraitis

    I do not understand the purpose of this experiment. Wouldn’t one expect a different (possibly much higher) ab-/adsorption rate if hydrogen is provided in its atomic form (as is the case in lithium hydride disintegration or electrolysis)? The loading ratio for molecular hydrogen should be irrelevant if hydrogen is produced in situ (or split up before it is used).

    • Ecco

      Molecular hydrogen splits upon adhesion (adsorption) to the metal’s surface, and penetrates the lattice (absorption) in atomic form. The amount of dissolved atomic hydrogen inside the lattice depends on the metal itself as well as on pressure and temperature. Other factors (impurities, substrate if present, structure, etc.) can also affect this.

      http://www.intechopen.com/source/html/40231/media/image5.png

      • Andreas Moraitis

        Yes, but the question is if nickel can do the job at these low temperatures.

        • Ecco

          It can, in bulk Nickel even:

          http://i.imgur.com/OEslqLw.png

          GS5.2 showed a threshold/activation temperature, which makes me suspect that something more than just absorption of H in pure Ni was involved, though.

          • Andreas Moraitis

            However, one might expect that atomic hydrogen would work much better, since the energy that is required to break the bonds (4.5 eV per molecule, which is a lot) could be saved.

          • Ecco

            I am not able to provide a complete explanation for this, but as far as I am aware, the energy barrier for H2 bond dissociation is greatly reduced on the surface of transition metals.

          • Bob Greenyer

            This experiment, as detailed in the live document, is primarily about processing the Nickel in the ways put forward by Rossi, Parkhomov, Brillouin and Piantelli.

            1. Heat treatment to remove water and maybe cause micro-explosions.
            2. Gentle heat treatment and vacuum to de-gas
            3. Gentle heat and Hydrogen to reduce oxides + vacuum
            4. Heat with Hydrogen and observe

            Additionally it is designed to understand how much H2 could be taken out of a volume in the cell – some of which may be being captured by the nickel powder.

            Subsequently, this “prepared” nickel will have other fuel elements added.

            These are claimed important steps that have not been undertaken by many attempted replications.

          • Albert D. Kallal

            Good stuff! I think this is another important test and step along the way!

            The fact that you’re doing this and sharing it in the public eye means you don’t have a lot of clothing on to cover things up!

            You are to be commended for this work and effort – and especially sharing this quest and adventure in public!

            Regards,
            Albert D. Kallal
            Edmonton, Alberta Canada

          • Warthog

            In Pd, hydrogen ONLY loads into the lattice as atomic hydrogen (IOW molecular hydrogen splits into its two component hydrogen atoms). I think it is safe to assume that nickel works similarly.

          • Andreas Moraitis

            It might be that while Pd experiments realize a ‘3D’ system, Ni systems operate rather in ‘2D’ (that is, on the surface). In addition, Pd-D seems to work better than Pd-H, and Ni-H better than Ni-D. One could speculate that the different spin types of H/D atoms or nuclei are relevant in this context, especially if collective effects are involved.

          • Warthog

            Not to be “Clintonian”, but it depends on your definition of “2D” vs “3D”. I think at the atomic level we are talking about, “everything” is ultimately 3D. There is plenty of data that says that nanostructure of a particular geometry is necessary, but the effect seems to need a “rough” rather than “smooth” surface. Ed Storms thinks the key feature is “microcracks”. I suspect that ultimately, Pd-D, Pd-H, Ni-H, and perhaps even Ni-D will have different “optimum” 3D nanostructures/NAEs (nuclear active environments) for maximum output.

            AFAIK, only George Miley (Lenuco) and Mitchell Swartz (JET Energy/NANOR) are actively pursuing an understanding of both Pd AND Ni (and varying mixtures of each) systems.

      • Roland

        What worked at SPAWAR was the co-deposition of palladium and hydrogen when electroplating very high-purity palladium onto the cathodes. For all intents and purposes the process pre-loaded the palladium lattice with hydrogen, and as soon as the plating was a few atoms deep the test cells would produce LENR every time.

        Unfortunately, the current problem’s complexity is, at least in part, probably a reflection of the multi-element fuel, as the potential variables in play threaten to turn exponential; as opposed to the relative simplicity of a two-body (two-element) problem.

        Based on the available information this simplicity and reliability came at the expense of every other desirable property sought after in a power source.

        • Ecco

          As I’ve written in a comment on quantumheat.org on exactly this matter:

          Loading might only be an indirect measurement of the suitability of a material for generating excess heat later on, or in other words more like a necessary but not sufficient condition. It might be apparent and caused by the formation of cracks/voids in the material (and adsorption of hydrogen therein), or enhanced through spillover absorption by other elements perhaps inadvertently or unexpectedly introduced in the system.

          Piantelli’s active samples always used some sort of treatment for increasing the active surface, including acid etching, electrodeposition, and so on.

      • Magnetostriction/electrostriction of the nickel could conceivably increase speed and extent of diffusion of H by agitating the nickel lattice.

  • Valeriy Tarasov

    No doubt, a Parkhomov-like experiment should be tested. But for the next step, I would just like to remind everyone of a better design for the LENR device – the “inside out” scheme (I suggested it here more than one year ago). Such a design is the same as in Rossi’s patent. It is better suited not only for heat exchange (to avoid the local overheating of the Parkhomov device) but also for electricity generation (voltage between the 50 and 52 layers of steel (and the symmetrical ones below) in the Rossi patent; heater and solenoid with DC for magnetic field generation in layer 40 of the patent).

    • Bob Greenyer

      Agreed

  • magicsnd1

    I’m now running a calibration with Alumina powder in both sides of the cell and H2 starting at 1 atm. The pressure behavior will tell us whether any of the H2 went into cell components other than the Ni in yesterday’s test. The live data stream is at http://magicsound.us/MFMP/video/

    • Mats002

      Hi Alan, what became your conclusion? How much H2 went into the Ni?

  • Bob Greenyer

    This test is showing the importance of heat+vacuum de-gassing, something we did a lot of during the Celani replications and which Piantelli says is entirely necessary, but most replication attempts appear to have overlooked this possible deal-breaker.

    • Ged

      It stands to reason that if other gases are bound to the nickel, they could impede, block, or chemically react with and nullify most or all of the hydrogen gas. Some absorption in such dirty metals could just be hydrogen chemical capture and deposition, rather than metal lattice interaction and NAE site filling.

      I guess once we do a full run we’ll know. But even so, the data so far is really encouraging with regard to the importance of the pre-treatment. The protocol is very nice, and seems sound (though some time spans still need defining in the live doc, and expressing baking/vacuuming time as a function of the mass of the nickel would be very useful).

      • Bob Greenyer

        Hi Ged, good idea…. Perhaps you could capture that from the data on HugNet and submit it for inclusion into the live doc.

    • Ted-X

      Bob, I suggest doing a cryogenic pre-treatment of the nickel. I think that it will bring a breakthrough in the experiments. Just my two cents based on intuitive thinking and supported by my friend with his crystal ball 🙂

      • Bob Greenyer

        Thanks – like freeze thaw treatment?

        • Stephen

          This sounds like a very good idea from Ted-X. Water (or moisture) expands when it freezes, and this has well-known effects in breaking up materials in geology.

          Perhaps this cryogenic cycling could be a good way to generate micro-cracks within the particles. This may be useful for LENR
          a) if Ed Storms’ idea that micro-cracks generate the NAE is correct – more micro-cracks may mean more NAE;
          b) if evanescent waves are important – many micro-fractures may be more likely to produce surfaces with the right orientation for the evanescent waves to form;
          and perhaps for other possible reasons too.

          I suppose this cryogenic cycling should be performed on the material before the heat treatment, so that the water is still present? Or I wonder if other volatiles need to be removed first so only water is present? Perhaps several or many cryogenic cycles could be beneficial.

          I wonder if his friend’s crystal ball is an ice crystal 😉

          • Gerald

            Why not test both? If the test you’re now performing doesn’t produce results, try to shock the crystal structure in place with nitrogen and test the active cell again. If that doesn’t work, first shock the material and then do the preheating sequence. I know – a lot of work said in one sentence. Great work you guys are doing!!

          • Bob Greenyer

            Great data coming from GS5.2. Just look at this chart. A huge and abrupt pressure drop as the temperature was raised, presumed to be free lithium taking up a new equilibrium state with hydrogen. Then raise the temperature again… and the pressure goes back up – then drop it back down again… and… the pressure goes UP!

          • Andreas Moraitis

            Maybe the Li-Al alloy solidified so that dissolved hydrogen had to be released.

          • Bob Greenyer

            Possibly – already learning so much from this experiment

            meanwhile… it looks like we may have around 10% excess (and rising) in the Celani wire in the MFC (cautious note at this stage)

            https://drive.google.com/folderview?id=0B9qCtGOFmvhmMHVBY3QwQnBMeUk&usp=sharing&tid=0B9qCtGOFmvhmTDBqMG5OejhUblU

          • Sanjeev

            And this is dT vs Active side temperature (Last 12 hours).
            There is something strange and chaotic going on at high temperature.
            (Attached)

          • Mats002

            Would be interesting to see that same diagram from a dummy run. If the dT is much lower in a dummy run, then there is a need to explain the origin of the thermal noise when Li is present (a dummy run is without LAH).

          • Sanjeev

            If by dummy run you mean the one done with only Ni and H2, then as far as I recall there was no excess; the null was always higher than the active.
            We need another control run with something like iron powder + H2 to see if this excess reappears.

          • Bob Greenyer

            Control Run will happen tomorrow.

            Alan noted last night that the cell was running hotter at same input levels.

            “Point of reference, T_active is running about 25 degrees above the 600 watt cycles 2 Feb, and about 35 above the de-gassing on 26 Jan.”

            and that is at the lower temps.

          • Stephen

            Are there any particular tests planned today as well or will today be a rest day? Just curious if there will be more to follow 😉

          • Bob Greenyer

            I think Alan will try to take out the ‘fuel’ load and prepare for back end calibration.

          • Stephen

            The back-end calibration and fuel analysis will be interesting; looking forward to it. This was a great test, with a lot of new information to think about.

          • Mats002

            Yes, I mean the one with only Ni and H2 – how high a T did it go to? Can you make the same diagram (dT/T) from that run?

          • Sanjeev

            Actually, the temperatures during that run never went higher than a few hundred degrees, so it won’t be comparable. But please see Bob’s comment just below about the 25 and 35 C excess compared to other runs.

          • Bob Greenyer

            Imagine IF a burst of heat came from one side – there would be a small pressure increase and this would force potentially hotter H2 (Highest heat capacity) into the other side and this would ring between the two.

          • Ged

            Very nice graph, Sanjeev. It shows a lot. A surprisingly large absolute increase of >60 C if one takes the active side’s calibration, or its original (under-the-null) behavior!

            I am still fighting tooth and nail to get time, but I’m going to try doing total measured temperature (both sides added together) versus power, compared to calibration. No matter what is going on with the heater, the two sides must sum to the same temperature proportional to the power in, unless there is energy production making heat.

          • Mats002

            Hi Ged, what if the T goes up and down, as in exo- and endothermic chemical reactions, giving an average temp corresponding to the power in?

            Would you say that such a scenario is energy neutral, or is there an energy ‘cost’ for driving T up and down over time?

          • Ged

            Aye, that would be cycling energy, or rather it would be in equilibrium and just oscillating over space, in which case it would be “energy neutral” for our purposes, as out = in. Really, it would just be sum total = power in. No extra losses though, since energy is already being put in, so entropy is already being increased; that is, as long as the material doing the oscillating doesn’t wear out, which would be seen as the oscillations damping over time and then ceasing (think of a pendulum given a starting push, where the push is the fresh reactants, with no wear or tear).

          • Mats002

            Well, with that logic, if the pendulum’s T goes up and down over time but increases on average, then energy is increasing – and that is in fact what the HUG data show in this experiment: the dT between active and null increases on average. Do you agree?

          • Ged

            The dT of active compared to null definitely averages much higher than the calibration. dT was negative, but is now positive by 20+ C, so the full change is bigger than just the “positive above null”. What I am going to look at is the sum total of both sides together, or simply active+null. I don’t have all the power data, but that will be the key to determining if there was more apparent, measured power out. Basically, doing what you are saying and looking at the average of the entire system.
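
            Ged’s sum check reduces to a per-sample figure, (T_active + T_null) / P_in, to be compared against the same figure from a calibration run. A minimal sketch in Python – the numbers are invented placeholders, not HugNet data:

```python
def total_per_watt(active, null, power):
    """(T_active + T_null) / P_in for each sample.

    If no excess heat is present, this combined figure should track the
    calibration's value regardless of how heat shifts between sides.
    """
    return [(a + n) / p for a, n, p in zip(active, null, power)]

# Invented placeholder samples (temperatures in C, power in W).
active_temps = [1000.0, 1010.0]
null_temps = [950.0, 948.0]
power_in = [600.0, 600.0]
combined = total_per_watt(active_temps, null_temps, power_in)
```

            A rise in this combined figure at constant input power, relative to calibration, is the signature Ged is describing.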

          • Mats002

            OK. What if your analysis shows an overflow of energy; what is the error margin of this setup? Alan and JustaGuy say 10–20%, but that is (to my understanding) a ballpark figure. How would you go about that?

          • Ged

            We actually have an advantage here we didn’t have before. We have the pulsing experiment, which gives multiple averages I can use. So, rather than ballparking, I can use statistical analysis. Really, to be proper, I need independent N (that is, an entire new run), but that is not always feasible for a while. So, the consequence will be that this will be the run testing against itself, and thus vulnerable to internal systematic error. This is true for all experiments in all of science, which is why replication is essential. I could use previous GS data to give me a true higher N, but there is so much data to crunch and so little time.

            But in brief, I would go about it with stats. The “margin of error” is already encoded in statistics as the variance and its square root, standard deviation. So the statistics will tell us on their own what the margin or error is and if there is a significant signal above the noise. No need for ballparks, or football parks, or dog parks here; if I can manage it. These large datasets really challenge my computer.

          • Mats002

            In this case I don’t think a computer is the obstacle. The obstacle is gathering data from all previous GS runs and doing the sum that you want.

            Can you describe what statistical analysis you need? Let’s ask MFMP for the data you need and crunch the numbers. What do you suggest?

          • Ged

            There are two tests: peak null versus peak active of experiment minus calibration, averaged from all technically successful runs (runs where something didn’t physically mess up, or where there is no known source of error), and a longitudinal test for any run that had multiple ramp ups/downs. The former is a simple Student’s t-test (or an F-test for the non-parametric case if the error is not normally distributed – but the error should be simple electronic fluctuation in the readings from the thermocouples, which would be normal), and the latter requires an ANOVA. I think only GS5.2 can be longitudinally investigated, which I am setting up for now.
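
            The first of those comparisons can be sketched with the Python standard library alone; here it is written as Welch’s t statistic (a small-sample variant of Student’s t that tolerates unequal variances). The run values below are invented placeholders, not actual GS measurements:

```python
import math
from statistics import mean, stdev

def welch_t(sample_a, sample_b):
    """Welch's t statistic for two independent samples.

    A large |t| for the degrees of freedom at hand indicates the means
    differ by more than their combined variance would suggest by chance.
    """
    va, vb = stdev(sample_a) ** 2, stdev(sample_b) ** 2
    na, nb = len(sample_a), len(sample_b)
    se = math.sqrt(va / na + vb / nb)
    return (mean(sample_a) - mean(sample_b)) / se

# Invented peak C-per-W figures for three runs vs. their calibrations.
active_runs = [16.7, 17.1, 16.4]
calibration_runs = [13.3, 13.5, 13.1]
t_stat = welch_t(active_runs, calibration_runs)
```

            The resulting t would then be compared against a t-distribution to get the significance Ged describes.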

          • Mats002

            I followed you up to the ‘longitudinal test’ – can you explain it for a 10-year-old, please?

          • Ged

            Longitudinal just means over time ;). If you are following the values of some parameter over time, compared to a control parameter over time, the data is two-dimensional: dimension one is the experimental parameter versus control, and dimension two is the time points from 0 to whenever they stop.

            The temperature pulses done in GS5.2 create very distinct breakpoints which allow me to treat each individual temperature hold as a discrete point of data in time. Since these march along in time, I can then do statistics on the -change- in the active compared to the -change- in the null side over time for each successive breakpoint.

          • Mats002

            How is the second (latter) part different from the first part – peaks of null and active over time?

          • Mats002

            dNull vs dActive over time is the latter? As in
            (Null[1] – Null[0]) / (Active[1] – Active[0])

            ?
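
            That per-step ratio could be computed like so (the hold temperatures here are invented placeholders, not run data):

```python
def delta_ratio(null_temps, active_temps):
    """(Null[i] - Null[i-1]) / (Active[i] - Active[i-1]) per step,
    skipping steps where the active side did not change."""
    ratios = []
    for i in range(1, len(active_temps)):
        d_active = active_temps[i] - active_temps[i - 1]
        d_null = null_temps[i] - null_temps[i - 1]
        if d_active != 0:
            ratios.append(d_null / d_active)
    return ratios

# Invented hold temperatures for each side (placeholders).
ratios = delta_ratio([600.0, 700.0, 800.0], [600.0, 710.0, 830.0])
```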

          • Ged

            I may do that too ;). Though, if we do it that way, each N will be each time step. Hmm, that is a good idea though, Mats. See, there are so many ways to slice and dice the data.

            That equation idea there will answer a different question statistically though, that question being if the active -changes more than- the null over time. I’m interested in the total heat out versus power in, but I will also do your method as that fixes the problem I was having looking at how the active slowly changes over time while held.

            It’s a different question though, and so will be a different test, if I can manage to wrangle the data for it. No promises with such unwieldy datasets.

          • Mats002

            Time to hit the sack for me. No hurry. I can crunch the numbers if you support with the algorithm. Nighty for now!

          • Ged

            Rest well!

          • Ged

            If we take the average peak temperatures of the different GS runs, say GS3’s peak with GS5’s peak with GS5.2’s peak yesterday, we lose all time data. This is simply an N of 3 for max active side compared against the calibration of that active side. Or better yet, sum of null and active versus power compared with the calibration’s sum of null and active versus power. This is flat data, there is no time at all, it’s a single number, like 2000 C/120 W ± 30 C/W versus 1600 C/120 W ± 20 C/W. The t-test will tell us if the means of the experimental runs averaged together, compared to the means of the calibrations, given their error, are significantly different. That is, testing if there is a significantly greater amount of heat per watt in the experimental compared to calibration, with an N of 3 independent GS runs.

            The latter is a single GS run over time, where each time point is the total average (maybe, haven’t completely decided how I want to handle the slow increase in the active side over time during the holds) of the hold temperature between the two breakpoints (ramp up, and ramp down). The N here is each time the same temperature is held, more or less, but really it’s a single experimental run (kinda like how global temperatures for the Earth are just a single run, with an N of 1 since we don’t have another Earth to create another N; but there’s still statistics and computer models and a whole bunch of time course science done on our single Earth N for global temperatures).

            Ideally, with the longitudinal test, I want to compare the sum total active+null/powerin against each successive time point at the same temperature hold (e.g. 3 different 1000 C holds) and against the calibration, where it exists for that temperature hold (thankfully the bookend calibration took it up to 1000 C, it looks like), which is time 0 more or less.

            I can still do this with active versus null, though, but it won’t be as robust a test as divided by power.

          • Mats002

            1. Average ((peak T) / (W at peak T))
            A) You list all GS runs to use
            B) I will give one number as the result +/- n

            2. Same average for the GS5.2 run

            Those two are to be compared – but what’s the error margin here?

          • Ged

            Error is the standard deviation that is calculated from the data. When you take the averages of points, you also take their standard deviation. The statistical tests then evaluate whether the mean, given that variance, is different enough to not be caused by chance. If you are using Excel, you would use STDEV on the same data you use AVERAGE on.

            Now, you can either take the single highest point that each run had, which is a single temperature reading and could be an unrepresentative outlier, or you could take a window of points where temps were at the average max, using breakpoints to tell when that max is over (basically, a breakpoint is just where the percent difference between the points of some sliding window rises above a certain threshold. For the GS5.2 data, I have found a suitable threshold of 2% for data points separated by 23 seconds, for determining when the data is undergoing a breakpoint ramp). The trick with a window is that when you create the average of the peak power, since it is all the data points over a certain time window hold, that average will itself have a standard deviation. Then when you average the averages, one has to take a complicated sum of the squared errors over the global mean error to get the right standard deviation for the actual averages of the independent runs.

            … So yes, it is easier to take the single-point max, but it’ll be more accurate to take the average maximum to avoid outliers. But don’t worry about that, you can just grab the single maxes if you like; that makes for a quick and easy first test :D.
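
            The breakpoint detection Ged describes can be sketched as a one-pass scan; the 2% threshold matches his figure for readings ~23 s apart, while the series itself is invented, not real GS5.2 data:

```python
def find_breakpoints(temps, threshold=0.02):
    """Indices where the fractional change between successive readings
    exceeds `threshold`, i.e. where the data is ramping rather than
    holding. Readings are assumed evenly spaced in time."""
    breaks = []
    for i in range(1, len(temps)):
        prev = temps[i - 1]
        if prev != 0 and abs(temps[i] - prev) / abs(prev) > threshold:
            breaks.append(i)
    return breaks

# Invented series: a hold near 800 C, a ramp, then a hold near 1000 C.
series = [800, 801, 799, 900, 1000, 1001, 999, 1000]
ramp_indices = find_breakpoints(series)
```

            With the ramp indices known, the stretches between them can be averaged as the discrete “hold” data points described above.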

          • Sanjeev

            Looking forward to your graphs.

          • Mats002

            I am thinking of the noise/error margin in this setup. The weakest part seems to be the coil and TC physical changes over temperature and over temperature cycles (time). It would be nice to know the degradation behaviour over T and T(cycle), and what is the worst acceptable physical change? If both sides degrade physically and electrically evenly, it is acceptable for showing that one side has XH, but not acceptable for calculating energy OUT. How much uneven degradation can be acceptable to show XH? Pre- and post-reference runs can be used to find the window of degradation, but those runs must go all the way to max T.

          • Ged

            Calibrating at each T level is so important. Really can’t stress that enough. For best accuracy, we need a standard protocol for temperature holds, and both calibration and active runs must follow it. Time can differ for each hold, and even the speed of ramps, but not the target T holds themselves. That would alleviate a lot of problems.

          • Bob Greenyer

            Seems to be repeatable / reversible – we bled some gas and it seemed to then settle at a higher equilibrium, as would be expected.

          • Bob Greenyer

            it is there…

          • artefact

            Crossover 🙂 @12:29:30

          • Bob Greenyer

            Yep even on a 30s average in the high temp part of the cycle.

            The BEAMS HAVE CROSSED on the *GlowStick* 5.2

            What this means is to be determined but right now the trend is encouraging.

          • Mats002

            Well time to ask (again) for a voltage measure point in the middle if the coil that spans over both active and null. What if one half of the coil degrades and that this is the root cause of the temp shift? How to rule out this possibility?

          • Mats002

            Should be “in the middle of the coil”, thanks to my smart phone.

          • Bob Greenyer

            It could be degradation – or one side being hotter, causing marginal relative resistance shift and therefore power dissipation change – despite the overall power being the same and obviously equivalent current through all wire.

            Looking at the voltage over time may be interesting

          • Ged

            I am sure we would have seen that much earlier at the previous 1000 C holds where nothing changed. But power is power, so just gotta look at that and wire resistance per side over time.

          • Ged

            Looking at the long time trace… Active is rising but null is not decreasing. If wire sidedness was changing but overall power was the same, then the null would -have to change proportionally- to the active side. There is no way around this as it is a ratio of a known total.

            So, I don’t see evidence of that right now (power in would have to be increasing as -overall- combined temp has gone up). But we need to know the full power budget, then we can observe for sidedness ratio changes.

          • Ged

            This plus the previous plus the Celani wire… We definitely have a serious phenomenon going on. Are there any rule-outs we still need to do that are shared between them?

          • Bob Greenyer

            An overview of the cycles.

          • Ged

            Looks like the rising pressure (overall trend, as obviously it drops during heating cycles) may be related to the increasing active side while the null stays static. It does suggest something is changing in the active side with the hydrogen loading equilibrium.

        • Stephen

          It does make me wonder if Alexander Parkhomov kept his samples outside in the Russian winter?

          If it did increase micro-fractures, maybe it would also increase the surface area for hydrogen to be adsorbed, and with more cracks, more of them may be large enough to store hydrogen in its required form (H2, monatomic H, or H− ions).

          I suppose liquid nitrogen would increase the shock in freezing and may freeze other substances with similar effect?

          Do you capture the released gases in a bladder or something? It could be useful to see if it was hydrogen, helium, H−, Rydberg matter, or something else. And perhaps even find a way to see the isotopes by reacting the gas with other materials and then analysing their spectra? I think Axil had an idea that Rydberg matter would be trapped in a balloon after the H2 leaked out.

          At least if the balloon floats you know it contains hydrogen or helium 😉

      • Axil Axil

        The goal of this experiment is to produce metalized hydrogen (AKA Hydrogen Rydberg Matter). This form of hydrogen seems to be the key to supporting the LENR reaction. I have written a post on the ways and means to produce this stuff.

        https://www.lenr-forum.com/forum/index.php/Thread/2717-Some-ideas-for-an-improved-Parkhomov-replication-not-a-replication-thread/?postID=12729#post12729

        • Andreas Moraitis

          Note that “a pressure approximately 1/4 of that required to metalize pure hydrogen itself” would still be gigantic. Nevertheless, the lithium might help in some way or other.

          • Bob Greenyer

            Yes – not quite the centre of the earth pressures – but still quite absurd.

          • Axil Axil

            Holmlid produces HRM by using a weak laser beam. Never say never.

          • Bob Greenyer

            He does, yes.

          • Bob Greenyer

            Ok – settling at the low now – will be ready to kick on the fast rise imminently.

          • artefact

            Its glowing 🙂

          • Mats002

            It is kind of counter-intuitive that the extremely high pressures needed for HRM to form are created in an atmosphere of low pressure, well below 1 atm. The ‘massage’ of H into Ni by heat cycling may produce HRM very locally. Actually, no one knows if HRM is what’s produced here, but somehow an NAE (whatever that is) is formed IF we see higher radiation levels and/or excess heat later on in this experiment. So far so good.

          • Axil Axil

            It is likely that not just one method but a number of methods taken together will be needed. For example, electropositive element alloying, pressure amplification from the fractured nickel lattice, and electrostatic stimulation might all work together to reach the parameters required for HRM creation.

          • Bob Greenyer

            Pressure taken to a little under 0.5 bar now.

            This is a very controllable set up – so it is over to the crowd to start making suggestions as to what should be done. The first push through the LiH breakdown (900-1000) might be interesting – then cycling to re-do the reversible state might create some nice data.

  • Bob Greenyer

    This test is showing the importance of heat+vacuum de-gassing, something we did a lot of during Celani replications and Piantelli says it is entirely necessary, but most replication attempts appear to have overlooked this possible deal-breaker.

    • Ged

      It stands to reason that if other gasses are bound to the nickel, it could impede, block, or chemically react with and nullify most or all of the hydrogen gas. Some absorption in such dirty metals could just be hydrogen chemical capture and deposition, rather than metal lattice interaction and NAE site filling.

      I guess once we do a full run we’ll know. But, even so, the data so far is really encouraging in regards to importance of the pre-treatment. The protocol is very nice, and seems sound (though some time spans still need defining in the live doc, and expressing baking/vacuuming time as a function of the mass of the nickel would be very useful).

      • Bob Greenyer

        Hi Ged, good idea…. Perhaps you could capture that from the data on HugNet and submit it for inclusion into the live doc.

    • Ted-X

      Bob, I suggest to do a cryogenic pre-treatment of nickel. I think that it will bring a breakthrough in the experiments. Just my two cents based on intuitive thinking and supported by my friend with his crystal ball 🙂

      • Bob Greenyer

        Thanks – like freeze thaw treatment?

        • Stephen

          This sounds like a very good idea from Ted-X. Water (or moisture) expands when it freezes, which has well-known effects in breaking up materials in geology.

          Perhaps this cryogenic cycling could be a good way to generate micro-cracks within the particles. This may be useful for LENR
          a) if Ed Storms's idea that micro-cracks generate NAE is correct — more micro-cracks may mean more NAE;
          b) if evanescent waves are important — many micro-fractures may be more likely to produce surfaces with the right orientation for the evanescent waves to form;
          and perhaps for other possible reasons too.

          I suppose this cryogenic cycling should be performed on the material before the heat treatment, so that the water is still present? Or I wonder if other contaminants need to be removed first, so only water is present? Perhaps several or many cryogenic cycles could be beneficial.

          I wonder if his friend's crystal ball is an ice crystal 😉

          • Gerald

            Why not test both? If the test you're now performing doesn't produce results, try to shock the crystal structure with nitrogen and test the active cell again. If that doesn't work, first shock the material and then do the preheating sequence. I know, a lot of work said in one sentence. Great work you guys are doing!!

        • Stephen

          It does make me wonder if Alexander Parkhomov kept his samples outside in the Russian winter?

          If it did increase micro-fractures, maybe it would also increase the surface area for Hydrogen to be adsorbed, and create more cracks that are large enough to store Hydrogen in its required form (H2, monatomic H, or H- ions).

          I suppose liquid nitrogen would increase the shock in freezing and may freeze other substances with similar effect?

          Do you capture the released gases in a bladder or something? It could be useful to see whether it is Hydrogen, Helium, H-, or Rydberg matter, and perhaps even find a way to see its isotopes by reacting with other materials and then analysing their spectra. I think Axil had an idea that Rydberg matter would be trapped in a balloon after the H2 leaked out?

          At least if the balloon floats you know it contains Hydrogen or Helium 😉

  • Stephen

    Hi Bob… Good luck with the GS5.2 test over the next days, it looks very well put together especially the calibration tests and analysis performed by Alan and Ecco. Will you be presenting some of the results here in ECW?

    The Celani Wire experiments and tests over the next days look really well put together and promising too. Will the results from these also be presented here from time to time?

    Looks to be an interesting week ahead.

    • Bob Greenyer

      Yes and Yes – I will do my best. Thanks for your support and attention.

      Very interesting week!

      We are going to do something novel to attempt to trigger the Celani wire in the MFC (not that it needed much in the past.) At the right time, after loading, possibly on Monday, Mathieu Valat and Jean-Paul Biberian are going to enact our plan to discharge a capacitor bank through the Celani wire – a sort of Brillouin “Q-Pulse” on the cheap. We have been testing what the wire will comfortably cope with. This is taking learning from Piantelli (thanks again to all those that helped make that important trip possible).

      Also – the Celani wire in the dual Celani cells will be running with D2 whilst being observed for signs of Gamma.

      The GS5.2 will be drawing on so much that has been learned also over the past year – managing pressure – getting the ‘loading’ in before the free lithium melts – managing temperature carefully around the initial trigger attempt – keeping the pressure below one atmosphere when attempting to activate the reversible Li / Al / H2 reaction.

  • Bob Greenyer

    Story so far… After vac, heat and vac… Managing hydrogen evolution from the 1st-stage decomposition of LiAlH4 to 1 bar by adjustment of the needle valve, at around 135ºC internal temperature (a few degrees above the Mössbauer-determined Debye temperature for pure nickel, and our lowest bound for Debye). This is to allow H2 to ideally adhere and then split on the nickel surface into mono-atomic H and be absorbed into the nickel (possibly) without being kinetically knocked off again by a high rate of high-velocity H2 before absorption can occur. We have headroom to 180.5ºC, whereupon the passivated nanoshell free lithium will melt and start to combine with free hydrogen also. This stage of the experiment is expected to take a while and we would like to proceed with caution, as several claimed successful replication attempts have said they kept lower for longer.

    • Sanjeev

      Thanks for the nice brief.

      It looks like the pressure is being maintained by either pumping in some H2 or releasing it via the valve. So how will we know if it's getting absorbed, and how much is being absorbed?

      • Mats002

        I guess you can't, because several things are going on simultaneously: LAH decomposition should increase pressure, “loading” should decrease pressure, and the valve to the canister is slightly open, which makes the free volume unknown, with possibly some chemistry going on in there as well…

        • Bob Greenyer

          This setup should have allowed reduction of the nano shell passivated lithium and any remaining oxides on the Nickel before the free Lithium melted.

  • Andrew

    Good luck! Something worth a try would be AlNi@68% Ni. Melting point over 1600 C and a Ni surface area of over 100m^2/gram. Raney Nickel.

  • e-dog

    Fingers crossed! Good luck guys.

    • Bob Greenyer

      Thanks!

  • Bob Greenyer

    Alan has added a lot of experimental notes to the live document on the QH GS5.2 experiment page. He discusses H2 uptake ratio and fuel mixing procedure in an affordable disposable glove bag.

    At the current power step, the core temperature is straddling the 250ºC IH patent application claim of excess heat onset – so we will be looking out for an inflection as we make the next step up in input power.

    The bleed valve (into an intermediary vacuum chamber) is ensuring pressure does not get too high. If the free lithium sucks all the H2 up between 250ºC and 500ºC – then we can always add some H2 from our reserve tank in order to get to the Wikipedia-quoted 0.25 bar equilibrium for the reversible reaction that Rossi's awarded patent claims is so important.

    Ecco notes that the experiment has been running mostly a little above calibration.

    • Andreas Moraitis

      I am afraid that releasing H2 is not a substitute for ad-/absorption or hydrogen cluster formation. In other words, low pressure does not equal low pressure. Good luck anyway.

      • Bob Greenyer

        No – agreed. Note – this is the same Nickel that had been put through rapid heating to remove ‘inherent water’ and cause ‘micro-explosions’ and that had then been seen to reduce the volume of H2 in the cell in previous steps in the GS5.2 experiment before it was ever mixed with other fuel elements. The details of these steps are in the live doc.

        In this chapter of the experiment – we are learning from Piantelli and Celani – deliberately keeping the pressure low to, in theory, allow Ni-surface-catalysed dissociated H2 > 2H to be taken into the lattice/sub-surface without too high a pressure of H2, which is suspected to otherwise kinetically remove 1H before it can be absorbed. There is no way in this chapter of the experiment to tell what proportion of the evolved H2 may be being ad/absorbed, and neither was there intended to be.

        • Mats002

          Yes, and also Parkhomov was clear on that the pressure should be well below 1 atm to expect the phenomenon to show up. Free volume is an important parameter together with volume / weight of the fuel ingredients to get there. Also preparation of the fuel must be important but Parkhomov was not very detailed about that chapter of his experiments as I can remember.

    • Ged

      Go guys go! Been way too busy to follow along this week (working every day of the week, including today), but I’ll look at the data as soon as able, if Ecco doesn’t beat me to the punch.

      Excited to see how this turns out. A lot of new parameter tuning makes this especially interesting.

      • Mats002

        Welcome to your second job Ged, the fun is just getting started. Welcome to Alan too; up till now the experiment has been run by Justaguy, very gentle and professional. I think the pressure needs to be lower though, but will it decrease by adding input heat? Parkhomov did it that way – what's the plan, MFMP? Also – somewhere along the increasing temp, maybe an EM kick?

  • Andreas Moraitis

    I wonder if the valve is still open. If not, I would say it looks good. We need to see the “golden cross” (green line outperforming the purple line), nonetheless.

    • Bob Greenyer

      Either crossing or both much higher than calibration.

  • Mats002

    According to the live video chat, MFMP is thinking of aiming for a minimum of 0.25 bar at 500 C internal by boosting input power to the max, making the internal temp go as fast as possible through the Ni Curie temperature, which is 353 C internal. This maneuver will start from about 250 C external, but when will it start? They might change this plan, we'll see.

    • Mats002

      For the record: They stick to this plan, going down in temp slowly, the boost should be about an hour from now.

  • Axil Axil

    The goal of this experiment and LENR engineering in general is the production of metalized hydrogen (AKA Hydrogen Rydberg Matter). This form of hydrogen seems to be the key to supporting the LENR reaction. I have written a post on the ways and means to produce this stuff.

    https://www.lenr-forum.com/forum/index.php/Thread/2717-Some-ideas-for-an-improved-Parkhomov-replication-not-a-replication-thread/?postID=12729#post12729

    • Andreas Moraitis

      Note that “a pressure approximately 1/4 of that required to metalize pure hydrogen itself” would still be gigantic. Nevertheless, the lithium might help in some way or other.

      • Bob Greenyer

        Yes – not quite the centre of the earth pressures – but still quite absurd.

        • Axil Axil

          Holmlid produces HRM by using a weak laser beam. Never say never.

          • Bob Greenyer

            He does, yes.

      • Axil Axil

        It is likely that not just one method but a number of various methods taken together. For example, electropositive element alloying, the pressure amplification from fractured nickel lattice, and electrostatic stimulation might all work together to get to the parameters required for HRM creation.

        http://www.extremetech.com/wp-content/uploads/2016/01/PVT_3D_diagram-640×718.png

  • Bob Greenyer

    Ok – settling at the low now – will be ready to kick on the fast rise imminently.

  • Bob Greenyer

    Pressure take to a little under 0.5 bar now.

    This is a very controllable set up – so it is over to the crowd to start making suggestions as to what should be done. The first push through the LiH breakdown (900-1000) might be interesting – then cycling to re-do the reversible state might create some nice data.

  • Bob Greenyer

    We seem to have an equilibrium state right now at 500ºC internal (400ºC external) with 5.25 psi (0.36 bar) – pressure was taken down to around 0.25 bar, but it would appear that some H2 has evolved from the fuel load, which is to be expected if it was at a higher saturation before the pressure bleed.

    Now… taking it to 1100ºC plus seems like a good idea – and some subsequent temperature cycling. Other than that, head over to the live feed and make suggestions of what you would like to see done.

    • catfish

      Pressure seems to be falling. Is there a reading for the internal temp? I seem to be missing it.

      • Ged

        Internal temp is inferred from calibrations. The thermocouples can't last in that hydrogen atmosphere, and the nickel is in the way.
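
        A minimal sketch of how such an inference can work: interpolate the external thermocouple reading against a previously measured external-vs-internal calibration curve. The calibration pairs below are invented for illustration, except the 400 ºC external / 500 ºC internal point Bob mentions elsewhere in this thread.

```python
# Hypothetical sketch: infer internal temperature from an external
# thermocouple reading via a calibration curve. The calibration
# points here are invented for illustration only.
from bisect import bisect_left

# (external_temp_C, internal_temp_C) pairs from a prior calibration run
CALIBRATION = [(100, 135), (250, 310), (400, 500), (700, 900)]

def internal_from_external(t_ext):
    """Linearly interpolate internal temperature from an external reading."""
    xs = [p[0] for p in CALIBRATION]
    ys = [p[1] for p in CALIBRATION]
    if t_ext <= xs[0]:
        return ys[0]
    if t_ext >= xs[-1]:
        return ys[-1]
    i = bisect_left(xs, t_ext)
    x0, x1 = xs[i - 1], xs[i]
    y0, y1 = ys[i - 1], ys[i]
    return y0 + (y1 - y0) * (t_ext - x0) / (x1 - x0)
```

        Real calibrations would use many more points and account for input power, but interpolating between bracketing calibration points is the basic idea.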

    • Ged

      Hopefully can be done in a slow ramp up to give steady state readings. That and slow is important according to some?

  • magicsnd1

    We’re now stepping up the power in discrete steps, for comparison with calibrations. The final target is 900 watts, ~1180 C in the core. The process will take about 2 hours.

  • artefact

    It's glowing 🙂

  • monti

    aaaand its gone 🙁
    the website ( http://magicsound.us/MFMP/video/ ) I mean 😉

    • Bob Greenyer

      it is there…

  • Bob Greenyer

    Great data coming from the GS5.2 Just look at this Chart. Huge and abrupt pressure drop as temperature raised, presumed to be free Lithium taking up a new equilibrium state with Hydrogen. Then raise the temperature again… and the pressure goes back up – then drop it back down again… and… the pressure goes UP!

  • Axil Axil

    There are five factors that might contribute to the formation of hydrogen Rydberg matter (HRM). Each of these factors contributes to the chances of HRM formation. The five factors are as follows:

    Electropositive catalytic activity (e.g. lithium, potassium, calcium oxide, rare earth oxides). The low work function of this material seems to be important in HRM catalytic activity. This includes graphite (http://arxiv.org/pdf/1501.05056v1.pdf)

    In the Lugano report, there was a coating of rare earths on the nickel fuel particles. This might be related to reducing the work functions of the nickel particles as a result of rare earth oxides in the fuel.

    High pressure produced by flaws in the crystal structure of metal (i.e. nickel)

    Electrostatic field amplification produced by elongated and sharp nanostructures.

    Hexagonal crystal structure that provides a quantum mechanical template for HRM formation.

    A long timeframe – this speaks to the fact that HRM is driven by probability causation similar to radioactive decay.

    Once HRM is formed, it remains active for a long time if it is kept inside the reactor core using containment produced by a magnetic material.

    • Obvious

      Please elaborate on the idea that there are REEs coating the nickel particles.
      How do you come to that conclusion?

      • Axil Axil
        • Obvious

          Looks more like a mostly lithium particle, but I see where your idea comes from. Some of these could be small molecules, for example mass 156. I’m not sure what that would be.
          However, a dash of Pr might make some sense since it does have some properties that could be useful. At least it isn’t very expensive anyways.

          • Axil Axil

            Isotopes

            156Gd: atomic mass 155.922118(4) u, natural abundance 20.47(9) %

            Natural gadolinium is a mixture of seven isotopes, but 17 isotopes of gadolinium are now recognized. Although two of these, 155Gd and 157Gd, have excellent capture characteristics, they are only present naturally in low concentrations. As a result, gadolinium has a very fast burnout rate and has limited use as a nuclear control rod material.

            Properties

            As with other related rare-earth metals, gadolinium is silvery white, has a metallic luster, and is malleable and ductile. At room temperature, gadolinium crystallizes in the hexagonal, close-packed alpha form. Upon heating to 1235°C, alpha gadolinium transforms into the beta form, which has a body-centered cubic structure.

            The metal is relatively stable in dry air, but tarnishes in moist air and forms a loosely adhering oxide film which falls off and exposes more surface to oxidation. The metal reacts slowly with water and is soluble in dilute acid.

            Gadolinium has the highest thermal neutron capture cross-section of any known element (49,000 barns).

            Uses

            Gadolinium yttrium garnets are used in microwave applications and gadolinium compounds are used as phosphors in color television sets.

            The metal has unusual superconductive properties. As little as 1 percent gadolinium improves the workability and resistance of iron, chromium, and related alloys to high temperatures and oxidation.

            Gadolinium ethyl sulfate has extremely low noise characteristics and may find use in duplicating the performance of amplifiers, such as the maser.

            The metal is ferromagnetic. Gadolinium is unique for its high magnetic moment and for its special Curie temperature (above which ferromagnetism vanishes) lying just at room temperature, meaning it could be used as a magnetic component that can sense hot and cold.

          • Obvious

            The 141 and 156 peaks could also be caused by dimethylnaphthalenes, which may be from the glue.

            https://pubchem.ncbi.nlm.nih.gov/rest/pug_view/data/key/721380_1

          • Axil Axil

            Before analysis, the particle was cleaned of contaminants using sputter cleaning for 180 seconds. The glue residue would have been cleaned off the particle. The heavy material that we are concerned about was welded into the nickel by the fuel preprocessing method. The particle may have been covered in a coating of heavy elements and then sintered by the application of an electric arc which welded the heavy element coating onto the surface of the nickel.

  • Mats002

    It is kind of counter-intuitive that the extremely high pressures needed for HRM to form are created in an atmosphere of low pressure, well below 1 atm. The ‘massage’ of H into Ni by heat cycling may produce HRM very locally. Actually – no one knows if HRM is what's produced here, but somehow a NAE (whatever that is) is formed IF we see higher radiation levels and/or excess heat later on in this experiment. So far so good.

  • Bob Greenyer

    The *GlowStick* 5.2 has been temperature cycling whilst Alan sleeps – and it is producing lovely data – as the temperature is dropped the pressure rises and vice versa… quite counter intuitive … is it a reversible reaction going on?

    Pressure trend is upwards, so may bleed and continue like this for a while.

    • US_Citizen71

      I do not know whether bleeding it is a good idea or a bad one, but I just looked at the 12-hour view of the graph on hugnet and it appears that as the pressure has risen, the delta between the active and null has narrowed.

      • Bob Greenyer

        It does look like that yes

  • Bob Greenyer

    The gap in the high temperature part of the cycle between the ‘active’ and null is as low as approx. 5ºC now at high power – it was 24ºC last night

  • artefact

    Crossover 🙂 @12:29:30

    • Bob Greenyer

      Yep even on a 30s average in the high temp part of the cycle.

      The BEAMS HAVE CROSSED on the *GlowStick* 5.2

      What this means is to be determined but right now the trend is encouraging.

      • Mats002

        Well time to ask (again) for a voltage measure point in the middle if the coil that spans over both active and null. What if one half of the coil degrades and that this is the root cause of the temp shift? How to rule out this possibility?

        • Mats002

          Should be “in the middle of the coil”, thanks to my smart phone.

        • Bob Greenyer

          It could be degradation – or one side being hotter, causing marginal relative resistance shift and therefore power dissipation change – despite the overall power being the same and obviously equivalent current through all wire.

          Looking at the voltage over time may be interesting

          • Ged

            I am sure we would have seen that much earlier at the previous 1000 C holds where nothing changed. But power is power, so just gotta look at that and wire resistance per side over time.

            Edit: looking again at the time trace, if power in is static, then it is impossible to be due to changes in the wire between the sides as that demands a ratio change (i.e. Null heating= power/active heating) which we don’t see. All we need is total power in to know what is going on as total temperature (null + active) has gone up for the device, which means more overall power is being emitted.

        • Ged

          Looking at the long time trace… Active is rising but null is not decreasing. If wire sidedness was changing but overall power was the same, then the null would -have to change proportionally- to the active side. There is no way around this as it is a ratio of a known total.

          So, I don’t see evidence of that right now (power in would have to be increasing as -overall- combined temp has gone up). But we need to know the full power budget, then we can observe for sidedness ratio changes.

      • Ged

        This plus the previous plus the celani wire… We definitely have a serious phenomenon going on. Any rule outs shared between the two different setups we could do?

  • Bob Greenyer

    An overview of the cycles.

    • Ged

      Looks like the rising pressure (overall trend, as obviously it drops during heating cycles) may be related to the increasing active side while the null stays static. Does suggest something is changing in the active side with the hydrogen loading equilibrium.

      Edit: Extreme speculation, but if there is hydrogen fusion (and not some other cause), then we would expect helium to be being created. Helium would likely not absorb to the same level as hydrogen due to its inert nature, and so pressure would go up over time? Obviously 2 moles hydrogen = 1 mole helium, but if most of the hydrogen is already absorbed and the pressure we see is from a small residual amount–that is pressure is controlled by absorption not moles–then we would predict a pressure increase with helium production. Hmmm. My kingdom for a gas spectrometer.
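
      A back-of-envelope sketch of Ged's speculation (all numbers invented for illustration, not measured GS5.2 values): if most hydrogen is absorbed and the gauge only sees the residual gas, then helium produced by 2 H2 → He that cannot be absorbed simply adds to the residual pressure, even though total moles of gas have fallen.

```python
# Illustrative ideal-gas estimate for the "inert helium raises residual
# pressure" speculation. Every number below is a made-up assumption.
R = 8.314  # gas constant, J/(mol*K)

def pressure_bar(n_mol, t_kelvin, v_m3):
    """Ideal-gas pressure in bar for n moles in volume v at temperature t."""
    return n_mol * R * t_kelvin / v_m3 / 1e5

# Hypothetical: 5e-5 mol of unabsorbed H2 in a 50 mL free volume at 773 K
v_free, t = 50e-6, 773.0
p_h2 = pressure_bar(5e-5, t, v_free)               # ~0.064 bar

# If 5e-5 mol of He appeared and none of it can be absorbed by the fuel,
# it adds directly to the residual pressure:
p_with_he = pressure_bar(5e-5 + 5e-5, t, v_free)   # roughly double
```

      The point of the sketch is only that when pressure is set by absorption rather than total moles, an unabsorbable product gas shows up as a pressure rise.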

  • Andreas Moraitis

    I like Alan G’s idea to play with the frequencies of the power supply. An effective method to approach the optimum might be choosing multiples of primes, for example 3/5/7/11 kHz etc. In case that there is some effect, one could use common multiples of the best performing primes, and so on.

    • Andreas Moraitis

      3/7/11/13…kHz would contain the “5”, so maybe 5 kHz could be left out.
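
      The two-round search Andreas describes can be sketched as follows (purely illustrative; the 30 kHz ceiling is an assumption taken from the supply limit discussed in this thread):

```python
# Sketch of a prime-based frequency search: test prime frequencies
# first, then common multiples of the best performers. This is an
# illustration of the idea, not an actual MFMP protocol.
from itertools import combinations

def prime_candidates(limit_khz):
    """Primes up to limit_khz, used as first-round test frequencies (kHz)."""
    primes = []
    for n in range(2, limit_khz + 1):
        if all(n % p for p in primes):
            primes.append(n)
    return primes

def common_multiples(best_primes, limit_khz):
    """Second round: pairwise products of the best performers within range."""
    prods = {a * b for a, b in combinations(best_primes, 2)}
    return sorted(p for p in prods if p <= limit_khz)

first_round = prime_candidates(13)              # [2, 3, 5, 7, 11, 13]
second_round = common_multiples([3, 5, 7], 30)  # [15, 21]; 35 exceeds 30 kHz
```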

    • nietsnie

      I also agree that the crucial missing piece will be found in the frequency of electromagnetism that the fuel is exposed to. Except – I don’t think that frequency will be found in the kilohertz range. I think it will be much higher. This could be identified by combining the output of two frequency sources together to produce harmonics above them. The mathematics of harmonic generation could be used to choose frequencies to attempt by adjusting the frequencies of the two contributing tonic frequency generators to produce harmonics above them that include the desired test frequency. The experiment parameters would be to raise the temperature to a promising range and hold the *power level* there. Then use the two frequency generators together like filters to slowly walk through a large test set of combinations. Hold each one for, say, 3 minutes while measuring the temperature – then advance to the next. Each tonic frequency combination will produce a large set of harmonics with known frequencies. At the end of the test run the combination of temperature change and frequency list for each step can be cross referenced to narrow down the search area to smaller frequency ranges to subsequently test more thoroughly.

      • Andreas Moraitis

        The optimum frequencies might be higher, but with a rectangular waveform you would get enough harmonics anyway. As far as I know, their power supply can provide up to 30 kHz. Much higher frequencies would require special wiring, I guess.

        • nietsnie

          Yes. And if you happened to hit the right frequency and a lot of energy was produced – maybe that's plenty good enough at this stage. That would certainly feel satisfying to me, at least. Plus, it has the advantage of being able to use the already available equipment. But, in the end, you wouldn't know how to generally repeat the result – it would be reliant upon the individual power supply – just as Parkhomov's seems to be. My idea would require sine rather than square wave generation. Its advantage would be that the operational frequency could be zeroed in on rather than just hoping a random harmonic reached it. You could ultimately narrow it down to a particular frequency that could be relied upon to get a positive result. At least – that's how it looks to me up here in the cheap seats.
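
          The comb of frequencies produced by two tonic generators can be enumerated as their harmonics and mixing products (a hedged sketch of the idea; the 7/11 kHz tonics and order limit are arbitrary examples, not a real setup):

```python
# Sketch of nietsnie's two-generator scheme: a nonlinear mix of two
# tonics f1 and f2 produces components at |m*f1 + n*f2|. Cross-
# referencing temperature response against each comb could narrow the
# search range. Purely illustrative numbers.
def mixing_products(f1, f2, order=3, fmax=None):
    """All distinct |m*f1 + n*f2| for |m|, |n| <= order, excluding DC."""
    out = set()
    for m in range(-order, order + 1):
        for n in range(-order, order + 1):
            f = abs(m * f1 + n * f2)
            if f and (fmax is None or f <= fmax):
                out.add(f)
    return sorted(out)

# Example: 7 kHz and 11 kHz tonics, components up to 40 kHz
comb = mixing_products(7, 11, order=2, fmax=40)
```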

  • Andreas Moraitis

    I like Alan G’s idea to play with the frequencies of the power supply. An effective method to approach the optimum might be choosing multiples of primes, for example 3/5/7/11 kHz etc. In case that there is some effect, one could use common multiples of the best performing primes, and so on.

    • Andreas Moraitis

      3/7/11/13…kHz would contain the “5”, so maybe 5 kHz could be left out.

    • nietsnie

      I also agree that the crucial missing piece will be found in the frequency of electromagnetism that the fuel is exposed to. Except – I don’t think that frequency will be found in the kilohertz range. I think it will be much higher. This could be identified by combining the output of two frequency sources together to produce harmonics above them. The mathematics of harmonic generation could be used to choose frequencies to attempt by adjusting the frequencies of the two contributing tonic frequency generators to produce harmonics above them that include the desired test frequency. The experiment parameters would be to raise the temperature to a promising range and hold the *power level* there. Then use the two frequency generators together like filters to slowly walk through a large test set of combinations. Hold each one for, say, 3 minutes while measuring the temperature – then advance to the next. Each tonic frequency combination will produce a large set of harmonics with known frequencies. At the end of the test run the combination of temperature change and frequency list for each step can be cross referenced to narrow down the search area to smaller frequency ranges to subsequently test more thoroughly.

      • Andreas Moraitis

        The optimum frequencies might be higher, but with a rectangular waveform you would get anyway enough harmonics. As far as I know, their power supply can provide up to 30 kHz. Much higher frequencies would require special wiring, I guess.

        • nietsnie

          Yes. And if you happened to hit the right frequency and a lot of energy was produced – maybe that’s plenty good enough at this stage. That would certainly feel satisfying to me, at least. Plus, it has the advantage of being able to use the already available equipment. But, in the end, you wouldn’t know how to generally repeat the result – it would be reliant upon the individual power supply – just as Parkhomov’s seems to be. My idea would require sine rather than square wave generation. Its advantage would be that the operational frequency could be zeroed in on rather than just hoping a random harmonic reached it. You could ultimately narrow it down to a particular frequency that could be relied upon to get a positive result. At least – that’s how it looks to me up here in the cheap seats.

  • Bob Greenyer

    We stepped up to 1150W on the upper part of the cycle and the ‘active’ is now riding CLEAR above the Null on a 1 minute average.

    • Mats002

      Very interesting and whatever the outcome of this experiment, so far very well performed with new learnings, thanks MFMP!

      Can you explain why null was hotter than active in the first place? Was it due to offset in TC signal calibrations or coil closer to null TC or hotspot on wire near null TC or…? How much temp difference between active and null is needed to be well over signal-to-offset ratio to have something significant?

      • Bob Greenyer

        If you cannot find the answer in the live doc, Mats002, I would direct that question to Alan on QH – he will give a detailed answer through an open channel in time.

        • Andreas Moraitis

          I made some conservative calculations regarding potential excess energy.

          Number of atoms in 1 g Ni = 1.026 * 10^22
          Number of atoms in 0.15 g LiAlH4 = 1.428 * 10^22
          Number of atoms in 1 mg additional H2 = 5.974 * 10^20

          That makes in total about 2.5 * 10^22 atoms in the fuel.

          Choosing 4 eV per atom as the “chemical limit” we get 2.5 * 10^22 * 4 eV = 16 kJ = 4.45 Wh.

          That is, provided that the reactor walls and steel parts do not react with each other, any excess energy beyond this value could not be ascribed to known chemical reactions. To prove that the readings reflect the released energy correctly will be the difficult part.

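As a cross-check on the arithmetic above – a short script reproducing the atom counts and the 4 eV “chemical limit” (standard molar masses assumed: Ni 58.69, LiAlH4 37.95, H2 2.016 g/mol):

```python
# Reproducing the atom counts and the 4 eV-per-atom "chemical limit".
# Assumed molar masses (g/mol): Ni 58.69, LiAlH4 37.95, H2 2.016.
AVOGADRO = 6.022e23
EV_TO_J = 1.602e-19

atoms_ni = 1.0 / 58.69 * AVOGADRO                 # 1 g Ni      -> ~1.026e22
atoms_lialh4 = 0.15 / 37.95 * AVOGADRO * 6        # 0.15 g, 6 atoms per LiAlH4
atoms_h2 = 0.001 / 2.016 * AVOGADRO * 2           # 1 mg H2, 2 atoms per molecule

total_atoms = atoms_ni + atoms_lialh4 + atoms_h2  # ~2.5e22
energy_j = total_atoms * 4 * EV_TO_J              # ~16 kJ
energy_wh = energy_j / 3600                       # ~4.45 Wh
```
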
          • Ged

            For (humorously useless) scale, it takes about 52 kJ to heat up the average cup of coffee (or tea).

            Edit: meanwhile, an Amazon tea candle produces 152 watt hours of energy. So, the available chemical energy in the GS5.2 reactor is very small compared to normal life experiences.

          • Andreas Moraitis

            4.45 Wh sounds like little, but it would still be enough for some fun: 16 kW if released within one second.

          • Ged

            Give a bit of a pop! That’s like… 21 HP, enough to add wheels and take it for a spin like an RC car for a second.

    • Ged

      I note that the overall pressure trend is still upwards. Very interesting. What could be responsible?

      • Bob Greenyer

        A number of things have been suggested – H2 evolution, H2 reduction of Al2O3 making water vapour, Lithium Vapour…

        • Ged

          The first and last one could explain the periodic ups and downs, but the slow rising trend of the max and min across cycles is what catches my eye. The making of water from Al2O3 could explain that if shown. But, if it was from Al2O3 reduction, we should see it in every single test using the material, so we can easily test that idea by looking back.

          • Bob Greenyer

            Also – Ecco suggests on QH that the SS may absorb and release H2

          • Ged

            That could be contributing to the periodic, and if it was pressure going down as a trend it could contribute to that, but I seriously doubt there’s a way SS absorbing hydrogen could be driving a long term, cycle agnostic, upward trend at this time scale. Hmm.

          • US_Citizen71

            What if the heating makes lithium vapor, and when the cooling cycle happens some of the vapor condenses on the top part of the tube and other places where the aluminium isn’t. Now there is less lithium available to reverse the reaction and take up the hydrogen, so the overall pressure rises.

          • Ged

            It wouldn’t just vaporize again? The device is tight enough to hold hydrogen in the main heated cell, lithium doesn’t stand a chance of sneaking anywhere, and we know heating on this design is rather even from the imaging. It’s possible, but I would think unlikely. But, we can know for sure by seeing if lithium is coating any surfaces outside the active cell and zones of heating.

          • US_Citizen71

            I didn’t mean to imply that the lithium wouldn’t vaporize again. It just would be in contact with the aluminum to form an alloy to absorb the hydrogen. If my suggestion is true, lithium may condense on the null side, which will be found when the cell is taken apart at the end of the tests.

          • Ged

            True, we shall find out!

          • Ecco

            There’s a small vent hole on one end of the fuel capsule, which means that if Lithium evaporates it could escape from there and react with solid oxides in the cell (for example the mullite ceramic tube or the alumina felt used as a central spacer). This might imply, as you’ve written, that over time there will be less of it available for the reversible hydride reaction.

          • Ecco

            Actually I suggested that the immediate uptake/release of hydrogen at those temperatures after every cycle might have been also due to metals in the cell other than Nickel or Lithium (i.e. the SS capsules and rods), while the long term rise due to possible decomposition of the silica (SiO2) fraction of mullite ceramics under exposure to hydrogen at high temperature. Jones Beene suggested this is possible for Al2O3 too, but apparently it’s true mostly for significantly higher temperatures under atomic hydrogen exposure (so he’s technically correct).

            See:

            Mullite and alumina decomposition in a hydrogen atmosphere
            Excerpt 1
            Excerpt 2

            Wikipedia: Silicon monoxide formation

            Kinetics of silica reduction in hydrogen
            Excerpt 1

            Solubility of Hydrogen in steel 1
            Solubility of Hydrogen in steel 2
            Solubility of Hydrogen in steel 3 (and Nickel)

            The possible Reduction of Alumina to Aluminum Using Hydrogen

  • Bob Greenyer

    The cell is taking a breather – being let to tick over at a mid-range temp whilst Alan takes the day off. The cell was vacuumed down and 60psi (4.14bar) of fresh H2 was put in to observe pressure related effects and to see if H2 would be absorbed in some way.

    Higher pressure appears to make the ‘active’ cooler relative to the null – which is an interesting finding and also, as you can see from the attached graph, there is some clear uptake of H2 in some part of the contents of the cell.

    • Ged

      This is very good. 1) This rules out pressure effects in the anomalous heating of the active side; 2) we still have our slow upward pressure trend mystery, but with a clean cell, we can see if it “resets”.

      Anyways, all my thanks to Alan for his masterful engineering, perseverance, and determination!

    • Stephen

      Just saw a small rise in temperature on both active and null of about 10 degrees from 14:00. Was some activity going on then? Or maybe some exothermic process started at this pressure?

  • Bob Greenyer

    We are planning to do a few bumps in power to see what occurs under the new Higher H2 pressure regime.

    • Bob Greenyer

      OK, so the cell is on soak overnight (California) again whilst Alan rests.

      Alan uploaded the Power data over the first part of the run and Ecco made the following chart from it.

      http://i.imgur.com/iilHBk4.png

      Using this chart – you can see precisely how the power was raised/controlled over time.

      • Andreas Moraitis

        Excellent. You should at last appoint Ecco as a regular member of MFMP.
        Could that cycling be done faster, without waiting for complete settling? Testing this for a limited time-span would not disturb the experiment, I hope.

        • Bob Greenyer

          Ecco likes his independence – we are very thankful for his deep and continuing contributions. Of course we would like to see him add his name to the stable, but maybe he feels he is more valuable as an outsider.

          When Alan is up later today – why not suggest it – this is everyone’s test!

      • Ged

        The stable power is a very good sign. Maybe Alan should do a high temp hold, to see if it creeps up without cycling. Maybe a pulse structure of short-short-short-long (3-5x a short)-repeat pattern, to incorporate Andreas’ suggestion.

        • Bob Greenyer

          Alan is up – so get your vote in there!

    • Bob Greenyer

      Key GS5.2 Data so far…

      We still have to determine what is causing the ‘crossover’ at high temperatures.

  • Bob Greenyer

    Taking core temperature to 1050ºC now – we are following the time/temp/pressure profile of a Parkhomov published experiment (one in a calorimeter).

    Alan Goldwater has just made this comment about the current performance of the GS5.2

    “T_active is running about 25 degrees above the 600 watt cycles 2 Feb, and about 35 above the de-gassing on 26 Jan.”

    We even have cross-over already.

    • Ged

      Interesting, and divergence is growing, I notice, as of right now.

    • Ged

      Huh, looks like when the pressure went back up, the active became cooler than the null again; that or it flipped back from going back down to 670 C. Such interesting behavior.

  • Sanjeev

    dT during last 12 hours. (Attached)

    • Alain Samoun

      OK guys, this is it! 20-25 deg. C for about 30 minutes. No question, it worked! CONGRATULATIONS!
      How many watts produced? In my opinion COP in these conditions has not much meaning, as the reactor is not insulated.

      • Sanjeev

        JustaUser commented on chat, and I agree. This is well within error margins. Still very encouraging.

        Fri Feb 5, 10:56:15am JustaUser2: Let us be honest about this though … This is only about a 25 C difference out of 1000 C, or only a 2 % change; we know that the calorimetry of this particular cell may only be accurate to 20 or 10 %, so the temp change is still about an order of magnitude below what the calorimeter can resolve, without a differential analysis

        • Mats002

          Agree, MFMP shows very good engineering and follows protocols according to Piantelli, Parkhomov and other sources. But so far no excess heat or radiation signals of significance.

        • Bob Greenyer

          I have to agree that the ‘signal’ so far is not really meaningful.

          Having said that – we have had a number of runs now where the ‘active’ has ultimately run hotter than the null – even when it started out trailing – and in the range of temperatures where the effect is claimed to be observed by the likes of Parkhomov. In this experiment, the ‘active’ sat below (one might say equivalent to) the null until a certain temperature range was entered. As said by Mark, other things may account for this.

          It must also be noted that the ‘active’ if hotter, will raise the temp of the null through H2 driven heat transfer – so the back end calibration is very important.

          This was a Parkhomov temp/time profile but not his Ni – the Russians claimed that the type of Nickel and size is important.

          I am of the mind that the structure of the cells needs to change a little, and I will state my case in due course.

          It may have helped greatly if we had a thermal imaging camera on the cell as the ‘active’ looked noticeably brighter at the high temperature ranges and it may be that the whole average temperature of the ‘active’ is measurably higher than the Null.

          The big learning from this experiment so far is the Nickel processing / H2 ad/absorption and pressure effects in various zones in the temperature profile.

          • Sanjeev

            You read my mind!
            I was going to suggest a design change where the null is isolated as much as possible (thermally). Either we can use a long tube with active/null parts at each end and a wall with a tiny hole in the middle or we can use two totally separate reactors connected by suitable plumbing in order to equalize the pressures. The point is to minimize the crosstalk in order to increase the signal.

            Anyway, I think this “treated and degassed” Ni could be a good candidate for flow calorimetry and it should be done now instead of spending time on “Lugano type” reactors.

    • Bob Greenyer

      This graph will be more meaningful with the core temperature on.

      • Sanjeev

        Don’t have the core temperature data, but I can add the external active temperature to it.

  • Sanjeev

    And this is dT vs Active side temperature (Last 12 hours).
    There is something strange and chaotic going on at high temperature.
    (Attached)

    • Mats002

      Would be interesting to see that same diagram from a dummy run. If the dT is much lower in a dummy run, then there is a need to explain the origin of the thermal noise when Li is present (a dummy run is without LAH).

      • Sanjeev

        If by dummy run you mean the one done with only Ni and H2, then as far as I recall there was no excess; the null was always higher than the active.
        We need another control run with something like iron powder + H2 to see if this excess reappears.

        • Bob Greenyer

          Control Run will happen tomorrow.

          Alan noted last night that the cell was running hotter at same input levels.

          “Point of reference, T_active is running about 25 degrees above the 600 watt cycles 2 Feb, and about 35 above the de-gassing on 26 Jan.”

          and that is at the lower temps.

          • Stephen

            Are there any particular tests planned today as well or will today be a rest day? Just curious if there will be more to follow 😉

          • Bob Greenyer

            I think Alan will try to take out the ‘fuel’ load and prepare for back end calibration.

          • Stephen

            The backend calibration and fuel analysis will be interesting; looking forward to it. This was a great test, with a lot of new information to think about.

        • Mats002

          Yes, I mean the one with only Ni and H2 – how high did T go? Can you make the same diagram (dT/T) from that run?

          • Sanjeev

            Actually the temperatures during that run never went higher than a few hundred, so it won’t be comparable. But please see Bob’s comment just below about 25 and 35C excess compared to other runs.

    • Bob Greenyer

      Imagine if a burst of heat came from one side – there would be a small pressure increase, and this would force potentially hotter H2 (highest heat capacity) into the other side, and this would ring between the two.

    • Ged

      Very nice graph, Sanjeev. It shows a lot. A surprisingly large absolute maximum increase of >60 C if one takes the active side’s calibration, or original (under the null) behavior (by eyeballing)!

      I am still fighting tooth and nail to get time, but I am going to try doing total measured temperature (both sides added together) versus power, compared to calibration. No matter what is going on with the heater, the two sides must sum to the same temperature proportional to the power in, unless there is energy production making heat.

      • Mats002

        Hi Ged, what if the T goes up and down as in exo- and endothermal chemical reactions giving an average temp corresponding to power in?

        Would you say that such a scenario is energy neutral or is it an energy ‘cost’ for driving T up and down over time?

        • Ged

          Aye, that would be cycling energy, or rather it would be in equilibrium and just oscillating over space, in which case it would be “energy neutral” for our purposes as out=in. Really, it would just be sum total = power in, as long as the ups and downs are actually equal and averaging out (remember that T^4 is proportional to power, so down must always be greater than up). No extra losses though, since energy is already being put in, so entropy is already being increased; that is, as long as the material doing the oscillating doesn’t wear out, which would be seen as the oscillations damping over time and then ceasing (think of a pendulum given a starting push, where the push is the fresh reactants with no wear or tear).

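Ged’s parenthetical (T^4 is proportional to power, so “down must always be greater than up”) can be illustrated with a toy radiative-balance model; the constant k and the power numbers below are made up for illustration:

```python
# Toy radiative balance P = k * T^4, so T = (P/k)**0.25 is concave in P:
# a symmetric power swing gives an asymmetric temperature swing, with the
# downward excursion larger than the upward one. k and the powers are
# made-up illustrative values; temperatures are in kelvin.
def equilibrium_temp(power, k=1e-8):
    """Steady-state temperature for a purely radiative loss P = k*T^4."""
    return (power / k) ** 0.25

p0, dp = 600.0, 100.0                       # nominal power and swing, W
t_mid = equilibrium_temp(p0)
t_up = equilibrium_temp(p0 + dp) - t_mid    # rise for +dp
t_down = t_mid - equilibrium_temp(p0 - dp)  # drop for -dp
# t_down > t_up: equal power steps are not equal temperature steps
```
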
          • Mats002

            Well, with that logic, if the pendulum’s T ups and downs on average increase over time, then energy is increasing – and that is in fact what the HUG data show in this experiment; dT between active and null on average increases. Do you agree?

          • Ged

            The dT of active compared to null is definitely averaged much higher than the calibration. dT was negative, but now positive by 20+ C, so the full change is bigger than just the “positive above null”. What I am going to look at is the sum total of both sides together, or simply active+null. I don’t have all the power data, but that will be the key to determining if there was more apparent, measured power out. Basically, doing what you are saying and looking at the average of the entire system.

          • Mats002

            OK. What if your analysis shows an excess of energy – what is the error margin of this setup? Alan and JustaGuy say 10 – 20%, but that is (my understanding) a ballpark figure. How would you go about that?

          • Ged

            We actually have an advantage here we didn’t have before. We have the pulsing experiment, which gives multiple averages I can use at the same temperatures but acting as independent time traces. So, rather than ballparking, I can use statistical analysis. Really, to be proper, I need independent N (that is, an entire new run), but that is not feasible for a while. So, the consequence will be that this will be the run testing against itself, and thus vulnerable to internal systematic error. This is true for all experiments in all of science, which is why replication is essential. I could use previous GS data to give me a true higher N, but there is so much data to crunch and so little time.

            But in brief, I would go about it with stats. The “margin of error” is already encoded in statistics as the variance and its square root, standard deviation. So the statistics will tell us on their own what the margin for error is and if there is a significant signal above the noise. No need for ballparks, or football parks, or dog parks here; if I can manage it. These large datasets really challenge my computer.

          • Mats002

            In this case I don’t think a computer is the obstacle. The obstacle is gathering data from all previous GS runs and do the sum that you want.

            Can you describe what statistical analysis you need? Let’s ask MFMP for the data you need and crunch the numbers. What do you suggest?

          • Ged

            There are two tests: peak null versus peak active of experiment – calibration, averaged from all technically successful runs (runs where something didn’t physically mess up, or where there is no known source of error), and a longitudinal test for any run that had multiple ramp ups/downs. The former is a simple Student’s t-test (or U-test for non-parametric if not normally distributed in error, but the error should be simple electronic fluctuation in the readings from the thermocouples, which would be normal), and the latter requires an ANOVA. I think only GS5.2 can be longitudinally investigated, which I am setting up for now.

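For the first of those two tests, a minimal sketch of the mechanics using Welch’s variant of the t statistic – all the dT values are invented placeholders, not MFMP data:

```python
# Welch's two-sample t statistic for mean peak dT (active - null):
# experimental runs vs. calibration runs. All values are invented.
from statistics import mean, variance

def welch_t(a, b):
    """Welch's t statistic for two independent samples."""
    va, vb = variance(a), variance(b)
    return (mean(a) - mean(b)) / (va / len(a) + vb / len(b)) ** 0.5

experimental = [22.0, 25.0, 28.0]  # hypothetical peak dT, fueled runs (C)
calibration = [-3.0, 1.0, 2.0]     # hypothetical peak dT, calibration runs (C)

t_stat = welch_t(experimental, calibration)
# |t| well above ~2 would suggest a difference beyond run-to-run noise;
# a real analysis also needs degrees of freedom and a p-value.
```
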
          • Mats002

            I followed you up to the ‘longitudinal test’ – can you explain for a 10 years old please?

          • Ged

            Longitudinal just means over time ;). If you are following the values of some parameter over time, compared to a control parameter over time. This means the data is two dimensional: dimension one is the experimental parameter versus control, and dimension two is the time points from 0 to whenever they stop.

            The temperature pulses done in GS5.2 create very distinct breakpoints which allow me to treat each individual temperature hold as a discrete point of data in time. Since these march along in time, I can then do statistics on the -change- in the active compared to the -change- in the null side over time for each successive breakpoint.

          • Mats002

            How is the second (latter) part different from the first part – peaks of null and active over time?

          • Mats002

            dNull vs dActive over time is the latter? As in
            (Null[1] – Null[0]) / (Active[1] – Active[0])

            ?
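            Mats002’s ratio of successive changes could be sketched like this (the hold averages here are made-up placeholders, not real GS5.2 numbers):

```python
# Hypothetical per-hold average temperatures (C); real values would come
# from the HUGnet logs
null_holds = [950.0, 952.1, 953.0, 954.2]
active_holds = [948.0, 955.3, 960.1, 966.8]

# Successive changes, one per time step
d_null = [b - a for a, b in zip(null_holds, null_holds[1:])]
d_active = [b - a for a, b in zip(active_holds, active_holds[1:])]

# (Null[i+1] - Null[i]) / (Active[i+1] - Active[i]) for each step
ratios = [dn / da for dn, da in zip(d_null, d_active)]
print(ratios)  # ratios well below 1 mean the active side is changing faster
```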

          • Ged

            I may do that too ;). Though, if we do it that way, each N will be each time step. Hmm, that is a good idea though, Mats. See, there are so many ways to slice and dice the data.

            That equation idea there will answer a different question statistically though, that question being if the active -changes more than- the null over time. I’m interested in the total heat out versus power in, but I will also do your method as that fixes the problem I was having looking at how the active slowly changes over time while held.

            It’s a different question though, and so will be a different test, if I can manage to wrangle the data for it. No promises with such unwieldy datasets.

          • Mats002

            Time to hit the sack for me. No hurry. I can crunch the numbers if you supply the algorithm. Nighty for now!

          • Ged

            Rest well!

          • Ged

            If we take the average peak temperatures of the different GS runs, say GS3’s peak with GS5’s peak with GS5.2’s peak yesterday, we lose all time data. This is simply an N of 3 for the max active side compared against the calibration of that active side. Or better yet, the sum of null and active versus power, compared with the calibration’s sum of null and active versus power. This is flat data, there is no time at all, it’s a single number, like 2000 C/120 W ± 30 C/W versus 1600 C/120 W ± 20 C/W. The t-test will tell us if the means of the experimental runs averaged together and the means of the calibrations, given their error, are significantly different. That is, testing if there is a significantly greater amount of heat per Watt in the experimental compared to calibration, with an N of 3 independent GS runs.

            The latter is a single GS run over time, where each time point is the total average (maybe, haven’t completely decided how I want to handle the slow increase in the active side over time during the holds) of the hold temperature between the two breakpoints (ramp up, and ramp down). The N here is each time the same temperature is held, more or less, but really it’s a single experimental run (kinda like how global temperatures for the Earth are just a single run, with an N of 1 since we don’t have another Earth to create another N; but there’s still statistics and computer models and a whole bunch of time course science done on our single Earth N for global temperatures).

            Ideally, with the longitudinal test, I want to compare the sum total active+null/powerin against each successive time point at the same temperature hold (e.g. 3 different 1000 C holds) and against the calibration, where it exists for that temperature hold (thankfully the bookend calibration took it up to 1000 C, it looks like), which is time 0 more or less.

            I can still do this with active versus null, though, but it won’t be as robust a test as dividing by power.

          • Mats002

            1. Average ((peak T) / (W at peak T))
            A) You list all GS runs to use
            B) I will give one number as the result +/- n

            2. Same average for the GS5.2 run

            Those two are to be compared – but what’s the error margin here?

          • Ged

            Error is the standard deviation calculated from the data. When you take the averages of points, you also take their standard deviation. The statistical tests then evaluate if the mean versus that variance is different enough to not be caused by chance. If you are using Excel, you would use STDEV on the same data you use AVERAGE on.

            Now, you can either take the single highest point that each run had, which is a single temperature reading and could be an unrepresentative outlier, or you could take a window of points where temps were at the average max, using breakpoints to tell when that max is over (basically, a breakpoint is just where the percent difference between the points of some sliding window rises above a certain threshold. For the GS5.2 data, I have found a suitable threshold of 2% for data points separated by 23 seconds, for determining when the data is undergoing a breakpoint ramp). The trick with a window is that when you create the average of the peak power, since it is all the data points over a certain time window hold, that average will itself have a standard deviation. Then when you average the averages, one has to take a complicated sum of the squared errors over the global mean error to get the right standard deviation for the actual averages of the independent runs.

            … So yes, it is easier to take the single point max, but it’ll be more accurate to take the average maximum to avoid outliers. But don’t worry about that, you can just grab the single maxes if you like; that makes for a quick and easy first test :D.
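            The breakpoint idea above can be sketched in a few lines of Python (the 2% threshold is the one quoted for GS5.2’s 23-second sample spacing; the trace itself is a toy placeholder):

```python
def find_breakpoints(temps, threshold=0.02):
    """Indices where the fractional change between successive samples
    exceeds the threshold, i.e. the data is ramping rather than holding."""
    return [i for i in range(1, len(temps))
            if abs(temps[i] - temps[i - 1]) / abs(temps[i - 1]) > threshold]

# Toy trace: a hold near 700 C, a ramp, then a hold near 1000 C
trace = [700, 701, 699, 720, 760, 810, 870, 940, 1000, 1001, 999, 1000]
print(find_breakpoints(trace))  # the ramp indices: [3, 4, 5, 6, 7, 8]
```

            Everything between two ramps is then one temperature hold, and each hold can be averaged into a single data point with its own standard deviation.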

      • Sanjeev

        Looking forward to your graphs.

        • Mats002

          I am thinking of the noise/error margin in this setup. The weakest part seems to be the coil and TC physical changes over temperature and over temperature cycles (time). It would be nice to know the degradation behaviour over T and T-cycles, and what the worst acceptable physical change is. If both sides degrade physically and electrically evenly, that is acceptable for showing that one side has XH, but not acceptable for calculating energy OUT. How much uneven degradation can be accepted and still show XH? Pre- and post-reference runs can be used to find the window of degradation, but those runs must go all the way to max T.

          • Ged

            Calibrating at each T level is so important. Really can’t stress that enough. For best accuracy, we need a standard protocol for temperature holds, and both calibration and active runs must follow it. Time can differ for each hold, or even the speed of ramps, but not the target T holds themselves. That would alleviate a lot of problems.

  • Bob Greenyer

    Contributor RabbitDuck has put together the following Graphs on google graphs

    https://www.google.com/fusiontables/DataSource?docid=1uf-9AcVq3hLvi6rU-XepKEXA8lLCkambWGrK5sJ5#chartnew:id=9

    • Ged

      Huh, what an interesting graph program thingy. Haven’t figured out how to properly interpret it yet, with all these filters.

      • Bob Greenyer

        it is yes.

        • Mats002

          At least one finger is not in a package – great!

      • Sanjeev

        I couldn’t draw any conclusions either.

  • Bob Greenyer

    You can test my logic here using a simple Kirchhoff’s law circuit – free to try in Chrome and on smartphones.

    http://everycircuit.com/

    Essentially, if you start with the same resistance in a series circuit, you will have the same power dissipated through both resistors; the current will naturally be the same as they are in series, and the voltage drop across the two will be the same. So, for two coils at 5 ohms each and 120 V applied, the current will be 12 A and the voltage drop 60 V per coil, resulting in 720 W per side.

    Now the resistance can go up in one of two ways

    1. Wire degrades via oxidation and stress resulting in loss of conductor width – this is largely driven by temperature

    2. Wire is hotter

    So what happens if one side is hotter?

    Firstly, the resistance will increase – but only a little and only when the wire is hotter. The rate of wire degradation (and therefore permanent resistance increase) will accelerate.

    If, say, the ‘active’ side’s resistance went from 5 ohms to 5.5 ohms and the passive side went from 5 ohms to 5.2 ohms, then the ‘active’ side would now dissipate 11.2 A × 61.7 V = 691.04 W and the passive side 11.2 A × 58.3 V = 652.96 W, a whole 38.08 W difference.
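    As a sanity check on that arithmetic, here is a minimal sketch of the series-circuit power split (same 120 V and resistances as above; dividing exactly instead of rounding the current to 11.2 A gives slightly different wattages):

```python
V_TOTAL = 120.0  # volts across both coils in series

def side_powers(r_active, r_passive):
    """Power dissipated in each resistor of a two-resistor series circuit."""
    i = V_TOTAL / (r_active + r_passive)  # same current through both sides
    return i * i * r_active, i * i * r_passive

print(side_powers(5.0, 5.0))   # matched coils: (720.0, 720.0)
p_a, p_p = side_powers(5.5, 5.2)
print(p_a, p_p, p_a - p_p)     # ~691.8 W, ~654.0 W, ~37.7 W apart
```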

    Now, what we saw during the run is the two sides starting off with the ‘active’ running a good 24ºC cooler than the passive. This could be partially due to some pre-existing skew of resistance to the passive side or differing levels of insulation on the coil or TC.

    However as the run progressed – the spread between the two sides closed and indeed crossed – being more pronounced at higher temperatures. Moreover, the passive side temperature dropped as well as the ‘active’ side rising.

    This leads me to suspect that the ‘active’ side’s resistance was progressively higher than the passive side’s, an effect that would have a positive feedback, resulting in more actual power being delivered to that side for the same TOTAL POWER, which is what we were fixing in the experiment. As said above, either higher temperatures or wire degradation (at a rate driven by temperature) would increase the resistance of a side.

    So, if both sides have the same resistance now, or the passive side is less than the ‘active’, and the wires appear similarly degraded, then there may well have been progressively larger excess heat, as less power would have been delivered to the ‘active’ side and the insulation did not change. If the ‘active’ side’s wire has significantly higher resistance now and it looks like it has visually degraded more, then this may show that it was exposed to more heat in a self-feeding loop (of course this may be due to higher insulation / less dissipation on that side overall).

    I don’t know at this stage –

    1. state of each side’s wires now

    2. starting resistance of each side

    3. current resistance of each side.

    My suspicion, looking at the trend in the data by eye, is that the ‘active’ side’s resistance AND heat output were increasing during the run. Having answers to Q2 and Q3 above would help determine how much of the change in the differential was coming from extra heat dissipation. Therefore, we must monitor the centre voltage in a run.

    Using a calorimeter or induction heating would not suffer these problems, though with an induction heater the two sides could not be held at the same temperature if the receivers of the induction heating had different properties.

    • Mats002

      Agree, and I want to add non-electrical types of degradation too: the wire expands at higher temperature, and that adds stress to the cement of the GS body. This might cause a tiny gap between the wire and the cement on one side, changing the thermal conduction properties. Even worse, I think, if a TC is near a conduction-gap point. I don’t know how much this type of degradation can change temperature readings. Would be nice to learn.

      • Bob Greenyer

        About the conduction gap – Alan did some work on this before.

        We need to rule out as many alternative explanations as possible – the backend calibration being conducted now will help.

    • Andreas Moraitis

      It was to be expected that the dual GlowStick would not allow very precise measurements. But I think it is anyway good enough to determine under which conditions you can get a distinctive anomaly, let’s say with an apparent COP >=1.5. Precision calorimetry could be done later. The main problem in this setup is that you need to find a recipe that guarantees a high enough COP.

      • Mats002

        Unfortunately, a high COP did not happen the first time. So now we are in a situation of trial-and-error, which calls for finer signals to go on. We need to know the error margins better.

        • Bob Greenyer

          From Alan

          “Each side of the heater coil measures 4.8 ohms, and 9.6 ohms end-to-end. These are after subtracting the meter lead resistance of 0.4 ohms. So no difference within the accuracy of the meter (±0.1 ohm)”

          So we know then that the difference in the temperatures was not down to fixed resistance (from imbalance at start) or changing resistance over time (from progressive degradation) leading to different power dissipation on either side.

          It could be due to different thermal conductivity of the surrounding apparatus or ‘active’ vs null fuel load and/or insulative/radiative differences in the two wire sheaths/TC covers.

          My next test though would be of the TCs to see if they have disproportionately changed with respect to each other in the run.

          To do this, I would split the cell in two and then immerse the two halves in a

          1. freezer / ice
          2. boiling water
          3. close together in a chip pan

          and try to determine if they are both reading very close temperatures across these three temperature points; the reason obviously would be to determine if the TCs are reading different temperatures. If they are the same, this will be important. I would suggest swapping the DAQ connections over in each test to ensure that there is no influence or bias from that element in the reading.

          • Sanjeev

            That’s good news.
            However, there is a big lead brick on the side of the active (assuming the left one is active); it can reflect more and cause a higher temperature. This can be easily fixed.

          • Bob Greenyer

            Should act the same in calibrations.

          • Sanjeev

            As far as I recall, the temperature during calibration never reached 1100°C (External). Or did it?

          • Bob Greenyer

            You recall right.

          • Ged

            During the oscillation part of the run, the same temperature was reached multiple times, but the active side slowly grew hotter each time the same temperature was reached. If the higher active temperature were a physical aspect of reflection or the materials of that side of the device, it should be a constant effect once a temperature is reached, rather than something that grows over time. We can definitely rule that out.

          • Sanjeev

            It’s strange that it happened in that way. Stranger still, the active and null offset has disappeared during (and after) the last run, for all temperatures, high or low.

          • Ged

            I’m still crunching the data in my spare time, so we’ll see if it holds any more secrets.

      • Bob Greenyer

        Yes Andreas – Recipe and maybe stimulation

    • Ged

      Hmm, I don’t know… Looking at the data right now in history form, I don’t really see a decrease in the null side in ratio with the active side’s increase. We’ll see as I analyze.

      Though, the only way for more power to be dissipated without the total power changing is if there were inefficiencies gobbling up some fraction of the total power before it reached the resistors, and that got fixed somehow. Otherwise we must always be in ratio with the power in; that is, both sides added together must equal total power.

  • Sanjeev

    dT plotted for the last run in vacuum and Ar (if that’s correct).
    It’s somewhat lower than the run in H2, broadly speaking.
    Attached.

    • Sanjeev

      I suggest increasing the fuel quantity to 10 g for future experiments.
      Probably 1 g is too tiny to produce any detectable signal.

      • Mats002

        Good idea. MY (yes, that MY) proposed in a serious post to MFMP about the Celani wire experiment to increase the number of wires to get a higher (or not) signal. It is not easy to do that because Celani wires are a scarce and expensive thing. In the GS experiments, though, it is not so hard to increase the ratio of ‘active’ substance to apparatus bulk overhead.

        • Sanjeev

          I think it’s too simplistic to assume that if 1 g can cause an excess of 10 C, 10 g could cause 100 C, but I expect at least some improvement (if there is any excess at all).
          The second thing is to try RF pulses with high temperatures in the same GS.

          About Celani wires, I never understood his reluctance to supply more wires or to even use more of them in his own experiments. These are not expensive compared to the cost of equipment (probably a few cents per wire). With such a tiny amount of mass, it’s like finding a needle in a haystack, a waste of valuable time. But it’s Celani’s IP, so I have no right to ask for anything.

          • Mats002

            That is precisely why open science as per MFMP is the only way to nail down this anomaly, whatever the outcome will be.

      • US_Citizen71

        I think before moving on to a new design, and all the complications that can come with that, it would be a good idea to do a run with water calorimetry. The current design is robust, and there is enough data to likely repeat the results of the last run with the same anomalies. The design could be easily slid into a steel pipe running through a container of water. Forming a ring out of castable alumina, like that used on the coil, on either end of the GS could keep it centered in the pipe. The container could be anything from a large metal pot to a metal trashcan with a steel pipe running horizontally through the sides a short distance from the bottom of the container. Seal the junction between the pipe and the container with something like JB Weld, add insulation and a float valve with a line running to an external graduated tank to keep the water level stable, and you have a calorimeter. Then it is just a matter of doing two identical runs for time and power, one fueled, one not.

        edit: Adding some alumina or glass wool near the ends to minimize airflow through the pipe would likely be a good idea as well. A short calibration run to determine power-to-temperature levels will be needed too, since the enclosed setup will be more insulated than one surrounded by nothing but air. So the power needed to reach and maintain a given temperature will likely be less.

        • Sanjeev

          I guess MFMP is waiting for something significant to show up before going to calorimetry.
          But I agree, there is no harm in trying a simple Parkhomov type calorimetry in parallel. Bob Higgins is building one for this purpose, I have no idea how long it will take.

          • US_Citizen71

            In my personal opinion, that last test run showed enough of a significant anomaly to warrant verifying the thermal output with more sensitive means, but it is not my project. I’d even be willing to donate funds toward building what I described above. Any willingness from MFMP to go that route?

          • Bob Greenyer

            I am actually quite keen to do it now. Let’s see if the TCs come back as equivalent to each other.

          • US_Citizen71

            Sounds like a plan, when does Alan think he will be able to test the TCs?

          • US_Citizen71

            I think it is a good idea to move forward with an easy-to-build calorimeter to further your project’s goals. The design of your test reactors is not going to deviate greatly from your current GS series for a while, as far as I can tell, so I believe it is time to start collecting calorimetry data, as it will be easier for the general public to understand. The difference between putting in 25 kWh of electricity and evaporating X liters of water on a null run, versus putting in 25 kWh and evaporating X + Y liters on a fueled run, is easy to understand.
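            For scale, the water-to-energy arithmetic is simple (standard handbook latent heat; the 25 kWh is just the figure from the example above):

```python
LATENT_HEAT = 2.26e6  # J/kg to vaporize water at 100 C (handbook value)
J_PER_KWH = 3.6e6

energy_in = 25 * J_PER_KWH        # 25 kWh of electrical input
litres = energy_in / LATENT_HEAT  # ~1 kg of water per litre
print(round(litres, 1))  # about 39.8 L if every joule went into boil-off
```

            So any extra litres (the Y above) translate directly into excess joules at roughly 2.26 MJ per litre, which is the appeal of this kind of calorimetry.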

            Also, the calorimeter would provide a nice stable thermal environment, which should help reduce noise in the temperature measurements. It should completely end the swings from air currents caused by opening doors and people walking around. It will likely allow you to go to higher temperatures too, since it should be more insulated. I sent you a bit of support to help you get there.

            One suggestion making the body out of something standardized like the below would help reproducibility.

            http://www.bayteccontainers.com/3-gallon-standard-5-gallon-open-head-steel-pails-covers-.html#gsc.tab=0

          • Bob Greenyer

            Thanks US – and you make good points

    • Ged

      Excellent, this run is a very helpful calibration; and a very nice graph :D.

      • Bob Greenyer

        It is a nice graph – it is a shame we could not remove the fuel cell – since we effectively cannot do a ‘no fuel’ back end calibration.

        • Ged

          Aye, that would be perfect. Particularly since, if one approaches this with the assumption that anomalous heat was successfully sparked, we don’t know for certain what it takes to stop it, thus potentially contaminating the null and book-end calibrations. There may yet be ways to test that in the data we have and tease it apart. We’ll see.

  • Sanjeev

    dT plotted for the last run in vacuum and Ar.(if that’s correct).
    Its somewhat lower than the run in H2, broadly speaking.
    Attached.

    • Ged

      Excellent, this run is a very helpful calibration; and a very nice graph :D.

      • Bob Greenyer

        It is a nice graph – it is a shame we could not remove the fuel cell – since we effectively cannot do a ‘no fuel’ back end calibration.

        • Ged

          Aye, that would be perfect. Particularly as if one approaches this with the assumption that there is amino Lois hear successfully sparked, we don’t what it takes to stop it for certain, thus potentially font animating null and book end calibrations. There may yet be ways to test that in the data we have and tease it apart. We’ll see.

  • Sanjeev

    I suggest increasing the fuel quantity to 10 g for future experiments.
    Probably 1 g is too tiny to produce any detectable signal.

    • Mats002

      Good idea. MY (yes that MY) proposed in a serious post to MFMP about the Celani wire experiment to increase the number of wires to get a higher (or not) signal. It is not easy to do that because Celani wires us a scarse and expensive thing. In the GS experiments though it is not so hard to add the ratio between ‘active’ substances versus apparatus bulk overhead.

      • Sanjeev

        I think its too simplistic for me to assume that if 1g can cause an excess of 10C, 10g could cause 100C, but I expect at least some improvement (if there is any excess at all).
        Second thing is to try RF pulses with high temperatures in the same GS.

        About Celani wires, I never understood his reluctance to supply more wires or to even use more of them in his own experiments. These are not expensive compared to the cost of equipment. (probably costs a few cents per wire). With such a tiny amount of mass, its like finding a needle in the haystack, a waste of valuable time. But its Celani’s IP, so I have no rights to ask for anything.

        • Mats002

          That is precisly why open science as per MFMP is the only way to nail down this anomaly whatever the outcome will be.

    • US_Citizen71

      I think before moving on to a new design and all the complications that can come with that, it would be a good idea to do a run with water calorimetry. The current design is robust, there is enough data to likely repeat the results of the last run with the same anomalies. The design could be easily slid into a steel pipe that is running through a container of water. Forming a ring out of castable alumina like that which is used on the coil on either end of the GS could keep it centered in the pipe. The container could be anything from a large metal pot to a metal trashcan with a steel pipe running horizontally through the sides a short distance from the bottom of the container. Seal the junction between the pipe and the container with something like JB Weld, add insulation and a float valve with a line running to an external graduated tank to keep the water level stable and you have a calorimeter. Then it is just a matter of doing two identical runs for time and power, one fueled one not.

      edit: Adding some alumina or glass wool near the ends to minimize airflow through the pipe would likely be a good idea as well. A short calibration run to determine the power-to-temperature relationship will be needed too, since the enclosed setup will be more insulated than one surrounded by nothing but air, so the power needed to reach and maintain a given temperature will likely be less.
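      The calibration idea above could be sketched roughly as follows. This is only an illustration with made-up numbers, not MFMP data: a null (unfueled) calibration run records steady-state temperature at several input powers, a curve is fitted, and a fueled run showing a higher temperature than the fit predicts at the same power would hint at excess heat.

      ```python
      # Hypothetical calibration sketch for the power-to-temperature mapping
      # discussed above. The data points are invented for illustration; a real
      # run would record (input power, steady-state temperature) pairs from a
      # null calibration of the actual calorimeter.
      import numpy as np

      # Fictional calibration data: input power (W) vs. steady-state temp (deg C)
      power_w = np.array([100.0, 200.0, 300.0, 400.0])
      temp_c = np.array([310.0, 520.0, 700.0, 860.0])

      # A low-order polynomial fit is usually enough over a narrow power range
      coeffs = np.polyfit(power_w, temp_c, 2)

      def expected_temp(p_w: float) -> float:
          """Temperature the calibration curve predicts for input power p_w."""
          return float(np.polyval(coeffs, p_w))

      # On a fueled run, apparent excess heat would show up as the measured
      # temperature exceeding expected_temp(p) at the same input power.
      print(f"Expected temp at 250 W: {expected_temp(250.0):.0f} C")
      ```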

      • Sanjeev

        I guess MFMP is waiting for something significant to show up before moving on to calorimetry.
        But I agree, there is no harm in trying a simple Parkhomov-type calorimetry in parallel. Bob Higgins is building one for this purpose; I have no idea how long it will take.

        • US_Citizen71

          In my personal opinion, that last test run showed enough of a significant anomaly to warrant verifying the thermal output with more sensitive means, but it is not my project. I’d even be willing to donate funds toward building what I described above. Any willingness from MFMP to go that route?

          • Bob Greenyer

            I am actually quite keen to do it now. Let’s see if the TCs come back as equivalent to each other.

          • US_Citizen71

            Sounds like a plan, when does Alan think he will be able to test the TCs?

          • US_Citizen71

            I think it is a good idea to move forward with an easy-to-build calorimeter to further your project’s goals. The design of your test reactors is not going to deviate greatly from the current GS series for a while, as far as I can tell, so I believe it is time to start collecting calorimetry data, as it will be easier for the general public to understand. The difference between putting in 25 kWh of electricity and evaporating X liters of water on a null run, versus putting in 25 kWh and evaporating X + Y liters on a fueled run, is easy to understand.

            Also, the calorimeter would provide a nice stable thermal environment, which should help reduce noise in the temperature measurements. It should completely eliminate the swings caused by air currents from opening doors and people walking around. It will likely allow you to go to higher temperatures too, since it should be more insulated. I sent you a bit of support to help you get there.
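            The evaporation comparison above can be sketched as a simple energy balance. This is only a rough illustration, assuming the water is already at 100 °C and all input electricity goes into vaporization (no losses); the 25 kWh figure comes from the comment, the 5-liter excess is an invented example.

            ```python
            # Rough energy-balance sketch for the evaporation calorimetry
            # described above. Assumes lossless vaporization of water already
            # at its boiling point; the excess figure is purely illustrative.

            LATENT_HEAT_VAPORIZATION = 2257e3  # J/kg, water at 100 deg C

            def liters_evaporated(energy_kwh: float) -> float:
                """Liters of water (~kg) that energy_kwh could evaporate with no losses."""
                energy_j = energy_kwh * 3.6e6  # 1 kWh = 3.6 MJ
                return energy_j / LATENT_HEAT_VAPORIZATION

            def excess_energy_kwh(null_liters: float, fueled_liters: float) -> float:
                """Apparent excess energy implied by extra evaporation on the fueled run."""
                extra_j = (fueled_liters - null_liters) * LATENT_HEAT_VAPORIZATION
                return extra_j / 3.6e6

            baseline = liters_evaporated(25.0)  # upper bound for a lossless null run
            print(f"25 kWh evaporates at most {baseline:.1f} L")
            print(f"5 extra liters on the fueled run implies "
                  f"{excess_energy_kwh(baseline, baseline + 5):.2f} kWh excess")
            ```

            In practice losses mean the null run evaporates less than this upper bound, which is exactly why the identical calibration run is needed.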

            One suggestion: making the body out of something standardized, like the pails below, would help reproducibility.

            http://www.bayteccontainers.com/3-gallon-standard-5-gallon-open-head-steel-pails-covers-.html#gsc.tab=0

          • Bob Greenyer

            Thanks US – and you make good points

  • Sanjeev

    Last 12 hours: destruction test.
    Looks like a lot of noise in the active-side TC. Whatever excess was seen in the fueled run could be just noise.
