Xtreme .Net Talk

Rockoon

Members
  • Posts

    8
  • Joined

  • Last visited

  1. Indeed... http://en.wikipedia.org/wiki/Public-key_cryptography
  2. Re: Patents are effective Which bad decision is that? Using an intermediate language is actually a good thing from where I stand. My latest project, which juggles 64-bit values, does so using the 64-bit general-purpose registers on 64-bit machines, but will still run on 32-bit machines without a rebuild. I can't honestly say there is a performance disadvantage either, since several of my routines (the important ones, in fact) are pushing ~2.5 instructions per clock cycle, which is very near the impossible-to-reach limit of 3 per cycle on AMD64s. I believe you are simply wishing that the .NET languages were something they weren't intended to be. Yes, it sucks if you are a VB programmer now left without an equivalent upgrade path that produces machine-code binaries... but Microsoft never promised you one.
  3. Re: Patents are effective Patents only prevent the big fish from stealing your algorithms. What can you do against the small fish? You can't take them to court, because it's not economically viable, and even detecting that they have violated your patent takes time and effort to reverse engineer THEIR program. Filing a patent for your algorithm is equivalent to handing it over for free to every small fish in the world. The best way to keep the small fish from using your algorithm is to NOT patent it. It's a catch-22. If you are serious about protecting your intellectual property, then don't implement it in a managed language, don't patent it, and make sure that when you do business, the contract gives you severe remedies against anyone you catch breaking it. Contract law is a much better place to be.
  4. Concept encapsulation doesn't have to mean hidden information. Programming languages represent abstract machines; this includes Ada, Basic, C, Delphi, Forth, Java, Lisp, Pascal, Smalltalk, etc., and all of them have represented concept encapsulation without hidden information in the past. A C++ compiler will produce code "5 to 7 times faster"??? Maybe in very specific examples pitting worst-case code against best-case code, most probably due to *hidden information* not indicating that you were in fact jamming one of the worst cases through the language/compiler/framework you were using. The big micro-optimisations these days are typically (1) memory-cache related, (2) those cases where assembly offers abilities not represented by an abstract language, such as rotate-through-carry, which simply can't be coerced out of any high-level language I am aware of, or (3) when the compiler consistently makes a bad choice, such as GCC's habit of converting constant multiplications into a series of shifts and adds (small gains versus big losses). But that is a special case like everything else: compiler-specific, not language-specific. So no, C++ isn't going to give you 5 to 7 times more bang for your buck over the long haul; more like 1.2 times at the most. These are all abstract languages, and if something is being done to good effect with one up-to-date compiler, it's also being done with the other up-to-date ones (special cases aside).
  5. There isn't one... well, not technically... although I suppose you could play with the display gamma...
  6. "Best" isn't a stated requirement, but such issues can have a very large effect on performance (I'm not talking about small performance increases here, I'm talking about large ones... still not approaching "best"). You know, I've heard this sort of "reasoning" for several decades. It wasn't true in 1986, it wasn't true in 1996, it isn't true in 2006, and it won't be true in 2016. If your programs have no time-critical code, then good for you. Some of us bang our heads against the performance wall regardless of how fast our processor is... because we desire to bang our heads against it! A faster processor simply means we can do more, and we fully plan to do so. I've been hearing this for decades too; it used to be said about C, then about C++, and always (incorrectly) about the various Basic and Pascal compilers, and so forth... BTW: .NET isn't a language.
  7. The Windows mixer API is apparently an evil, obfuscated, poorly documented construct designed to make programmers kick their neighbor's dog. Seriously. I'm not really offering help with the raw mixer API here, just a word of caution: resist the urge to kick. But someone HAS ported a lot of the mixer functionality to .NET: http://www.codeproject.com/useritems/AudioLib.asp?msg=1467837
  8. As a highly versed x86 assembly language programmer... The problem with "hidden" information is that it is often useful knowledge for making (or avoiding!) algorithmic changes that would alter genuine real-world performance. As most decent programmers know, the big optimisations are algorithmic changes rather than code-reordering/inlining/whatever tweaks. Is that list iterator on that class in the class library thrashing the L1 cache or not? (Basically, how local is the auxiliary data needed for iteration management? I can't know; it's hidden from me, and a derived class might also behave differently.) That's important info that could easily help me avoid trying multiple strategies (i.e., "one item at a time, many operations" vs. "one operation at a time, many items" vs. "neither is acceptable, must rethink the overall plan"). Of course, none of this means anything unless performance is an issue... but I think that more often than not, most GOOD programmers are slamming their heads against the performance wall frequently (because that sort of experience is PRECISELY what made them good to begin with!)